H-94
MA · State · USA
MA
USA
● Pre-filed
Proposed Effective Date
2025-07-07
An Act to ensure accountability and transparency in artificial intelligence systems (House No. 94, 194th General Court)
Summary

Establishes Chapter 93M of the Massachusetts General Laws, imposing accountability and transparency obligations on developers and deployers of AI systems, with heightened requirements for high-risk AI systems that materially influence consequential decisions in employment, housing, healthcare, lending, insurance, education, and government services. Developers must exercise reasonable care to mitigate algorithmic discrimination, provide deployers with documentation on intended uses, risks, and training data, notify the AG and deployers of discrimination risks within 90 days, and publish a public summary on their website. Deployers of high-risk AI must maintain NIST-aligned risk management programs, conduct annual impact assessments, notify consumers of AI-influenced consequential decisions, provide explanations and appeal processes, and publicly disclose their high-risk AI systems. Corporations using AI to target consumers or influence behavior must make additional disclosures. Enforced exclusively by the Attorney General under Chapter 93A; no private right of action. Exemptions exist for small businesses under 50 employees, low-risk AI systems, and entities subject to equivalent federal regulation.

Enforcement & Penalties
Enforcement Authority
The Attorney General has exclusive authority to enforce this Chapter. Violations are deemed unfair or deceptive trade practices under Chapter 93A. An affirmative defense is available if the developer or deployer identifies and remedies violations through testing, internal review, or consumer feedback, and demonstrates compliance with recognized AI risk management standards. No private right of action is created for consumers.
Penalties
Violations are deemed unfair or deceptive trade practices under Chapter 93A, which provides for injunctive relief, civil penalties, and other remedies available to the Attorney General under that chapter. The bill does not specify independent statutory damages or penalty amounts; remedies flow from the existing Chapter 93A enforcement framework.
Who Is Covered
Developer: An entity or individual developing, modifying, or making AI systems available in Massachusetts.
Deployer: An entity using AI systems to make decisions impacting consumers in Massachusetts.
What Is Covered
High-Risk Artificial Intelligence System: AI systems that materially influence consequential decisions, including but not limited to: (a) Education opportunities; (b) Employment decisions; (c) Financial or lending services; (d) Housing access; (e) Healthcare services; (f) Insurance decisions; (g) Legal or government services.
Compliance Obligations · 10 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Developer · Automated Decisionmaking
Chapter 93M, Section 2(a)
Plain Language
Developers of AI systems available in Massachusetts must exercise reasonable care to identify, mitigate, and disclose risks of algorithmic discrimination. This is a general duty of care — not limited to high-risk systems — requiring developers to proactively assess whether their systems produce differential treatment or impact across a broad list of protected characteristics. The duty encompasses identification, mitigation, and disclosure, making it a continuing obligation throughout the system lifecycle.
Statutory Text
(a) Duty of Care: Developers must use reasonable care to identify, mitigate, and disclose risks of algorithmic discrimination.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Chapter 93M, Section 2(b)
Plain Language
Developers must furnish deployers with documentation covering three areas: (1) a summary of the system's intended and foreseeable uses, (2) known limitations and risks, specifically including algorithmic discrimination risks, and (3) information about training datasets and bias mitigation measures applied. This is a pre-deployment downstream disclosure obligation — deployers cannot comply with their own impact assessment and risk management obligations without this documentation from developers.
Statutory Text
(b) Documentation Requirements: Developers must provide deployers with: (1) A summary of intended and foreseeable uses of the AI system; (2) Known limitations and risks, including algorithmic discrimination; (3) Information on the datasets used for training, including measures taken to mitigate biases.
R-01 Incident Reporting · R-01.3 · Developer · Automated Decisionmaking
Chapter 93M, Section 2(c)
Plain Language
When a developer discovers — or should foresee — that an AI system poses risks of algorithmic discrimination, the developer must notify both the Attorney General and all deployers within 90 days. This is a triggered reporting obligation: the 90-day clock runs from discovery, not from deployment. The obligation covers both known risks (actual discovery) and foreseeable risks, which broadens the trigger beyond confirmed discrimination to encompass risks a reasonable developer should anticipate.
Statutory Text
(c) Disclosure of Risks: Developers must notify the Attorney General and deployers of any known or foreseeable risks of discrimination within 90 days of discovery.
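The key mechanic in Section 2(c) is that the 90-day clock runs from discovery of the risk, not from deployment of the system. A minimal sketch of how a compliance team might track that deadline (the function names and date logic here are illustrative assumptions, not anything specified in the bill):

```python
from datetime import date, timedelta

NOTIFICATION_WINDOW_DAYS = 90  # Section 2(c): 90 days from discovery

def notification_deadline(discovered_on: date) -> date:
    """Deadline for notifying the Attorney General and all deployers of a
    known or reasonably foreseeable discrimination risk. The clock runs
    from the discovery date, not from the system's deployment date."""
    return discovered_on + timedelta(days=NOTIFICATION_WINDOW_DAYS)

def is_overdue(discovered_on: date, today: date) -> bool:
    """True once the statutory window has elapsed without notification."""
    return today > notification_deadline(discovered_on)
```

Because the trigger also covers foreseeable risks, in practice "discovery" would include the date a reasonable developer should have anticipated the risk, which is a judgment call this sketch does not attempt to model.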
G-02 Public Transparency & Documentation · G-02.4 · Developer · Automated Decisionmaking
Chapter 93M, Section 2(d)
Plain Language
Developers must publish on their website a plain-language public summary covering the types of AI systems they develop, the measures they take to mitigate algorithmic discrimination, and contact information for public inquiries. This is a standing public transparency obligation — the summary must be accessible to the general public, not just deployers or regulators.
Statutory Text
(d) Public Statement: Developers must publish a plain-language summary on their website, detailing: (1) Types of AI systems they develop; (2) Measures to mitigate algorithmic discrimination; (3) Contact information for inquiries.
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Automated Decisionmaking
Chapter 93M, Section 3(a)
Plain Language
Deployers of high-risk AI systems must establish and maintain a formal risk management program that identifies and mitigates known or foreseeable risks of algorithmic discrimination and aligns with industry standards such as the NIST AI Risk Management Framework. This is a continuing obligation — the program must be maintained, not merely created. NIST AI RMF alignment is cited as a benchmark but the provision uses 'such as,' suggesting it is illustrative rather than an exclusive safe harbor. The AG has rulemaking authority under Section 7 to designate recognized frameworks.
Statutory Text
(a) Risk Management Policy: Deployers of high-risk AI systems must implement and maintain a risk management program that: (1) Identifies and mitigates known or foreseeable risks of algorithmic discrimination; (2) Aligns with industry standards, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.8 · Deployer · Automated Decisionmaking
Chapter 93M, Section 3(b)
Plain Language
Deployers must conduct an annual impact assessment for every high-risk AI system they operate, covering the system's purpose and intended use, the data categories it processes and outputs it generates, and potential discrimination risks along with mitigation measures. Impact assessments must also be updated whenever a substantial modification is made to the system — this is in addition to the annual cadence, not a substitute for it. The state will provide templates to standardize and reduce the compliance burden. Note that the AG has rulemaking authority under Section 7 to further define impact assessment requirements.
Statutory Text
(b) Impact Assessments: (1) Deployers must complete an annual impact assessment for each high-risk AI system, including: (i) The purpose and intended use of the system; (ii) Data categories used and outputs generated; (iii) Potential risks of discrimination and mitigation measures. (2) Impact assessments must be updated after any substantial modification to the system. State-provided templates for these assessments will be made available to reduce compliance burdens.
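Section 3(b) imposes two independent triggers: an annual cadence and a re-assessment after any substantial modification, where the modification trigger adds to rather than resets the annual one. A hypothetical sketch of how a deployer might model the required assessment fields and the due-date logic (all names are illustrative; the bill prescribes the content, not a data format):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    """One Section 3(b) assessment of a high-risk AI system.
    Fields (i)-(iii) mirror the statutorily required elements."""
    system_name: str
    purpose_and_intended_use: str      # (i)
    data_categories: list              # (ii) inputs
    outputs_generated: list            # (ii) outputs
    discrimination_risks: list         # (iii) risks
    mitigation_measures: list          # (iii) mitigations
    completed_on: date

def assessment_due(last: ImpactAssessment,
                   today: date,
                   substantially_modified: bool) -> bool:
    """A new assessment is due annually AND after any substantial
    modification; the modification trigger is in addition to the
    annual cadence, not a substitute for it."""
    annual_due = today >= last.completed_on + timedelta(days=365)
    return annual_due or substantially_modified
```

The state-provided templates contemplated by the bill, once available, would presumably standardize these fields further.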
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.3 · H-01.5 · Deployer · Automated Decisionmaking
Chapter 93M, Section 3(c)
Plain Language
When an AI system materially influences a consequential decision about a consumer, deployers must: (1) notify the consumer that AI was involved, (2) explain the system's purpose and how it influenced the specific decision, and (3) provide a process for the consumer to appeal or correct adverse decisions. The notification trigger is 'material influence' on a consequential decision — meaning the AI system determines or heavily weighs inputs that directly affect the outcome. This bundles three distinct consumer rights: pre/at-decision notice, explanation, and appeal. The appeal process must cover both adverse decisions (reversal) and corrections (data accuracy).
Statutory Text
(c) Consumer Protections: Deployers must: (1) Notify consumers when an AI system materially influences a consequential decision; (2) Provide consumers with: (i) The purpose of the system; (ii) An explanation of how the system influenced the decision; (iii) A process to appeal or correct adverse decisions.
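Section 3(c) bundles three distinct consumer rights behind a single two-part trigger: the AI must both materially influence the decision and the decision must be consequential. A minimal illustrative sketch of that trigger and the bundled notice contents (field and function names are assumptions for illustration, not statutory terms):

```python
from dataclasses import dataclass

@dataclass
class DecisionNotice:
    """The three Section 3(c) consumer rights, bundled per decision."""
    ai_disclosed: bool      # (1) notice that AI materially influenced the decision
    system_purpose: str     # (2)(i) purpose of the AI system
    explanation: str        # (2)(ii) how the system influenced this decision
    appeal_process: str     # (2)(iii) route to appeal or correct an adverse decision

def notice_required(materially_influenced: bool, consequential: bool) -> bool:
    """Section 3(c) is triggered only when an AI system materially
    influences a consequential decision; incidental AI involvement in a
    non-consequential interaction triggers nothing under this section."""
    return materially_influenced and consequential
```

Note the contrast with Section 4(c), where algorithmic pricing and eligibility determinations trigger notice without the consequential-decision qualifier.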
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Chapter 93M, Section 3(d)
Plain Language
Deployers must publicly disclose what types of high-risk AI systems they operate and how they mitigate the associated risks. This is a standing public transparency obligation distinct from the deployer-facing documentation developers must provide under Section 2(b) and from the impact assessments under Section 3(b). The provision does not specify the format or publication location, though the AG has rulemaking authority under Section 7 to elaborate on requirements.
Statutory Text
(d) Transparency: Deployers must publicly disclose the types of high-risk AI systems in use and their risk mitigation strategies.
G-02 Public Transparency & Documentation · Deployer · Automated Decisionmaking · General Consumer App
Section 4(a)-(b)
Plain Language
Any corporation operating in Massachusetts that uses AI to target consumer groups or influence consumer behavior must disclose the methods, purposes, and contexts of that targeting, the specific ways AI is designed to influence behavior, and details of third-party entities involved in the design, deployment, or operation of those AI systems. These disclosures must be posted publicly on the corporation's website in an accessible format and included in terms and conditions provided to consumers before significant interaction with an AI system. Proprietary information is protected under state confidentiality laws. This Section 4 obligation is broader than Section 3's high-risk system focus — it applies to any AI used for consumer targeting or behavioral influence, regardless of whether it qualifies as high-risk.
Statutory Text
(a) Disclosure of AI Use: Any corporation operating in Massachusetts that uses artificial intelligence systems or related tools to target specific consumer groups or influence behavior must disclose: (1) Purpose of AI Use: The methods, purposes, and contexts in which AI systems are used to identify or target specific classes of individuals; (2) Behavioral Influence: The specific ways in which AI tools are designed to influence consumer behavior; (3) Third-Party Partnerships: Details of any third-party entities involved in the design, deployment, or operation of AI systems used for targeting or behavioral influence. Proprietary information will be safeguarded and exempt from public disclosure under state confidentiality laws. (b) Public Disclosure Requirements: Corporations must make these disclosures: (1) Publicly available on their website in a manner that is easily accessible and comprehensible; (2) Included in terms and conditions provided to consumers prior to significant interaction with an AI system.
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking · General Consumer App
Section 4(c)
Plain Language
Consumers must be notified when AI systems are targeting or influencing them in ways that materially impact their decisions, and when algorithms are used to determine pricing, eligibility, or access to services. This is a real-time notification obligation separate from the general public website disclosure in Section 4(a)-(b). It applies to any corporation using AI for targeting or behavioral influence — not limited to high-risk AI systems. The pricing and eligibility trigger is notable: algorithmic pricing and eligibility determinations always require consumer notification regardless of whether they rise to the level of a 'consequential decision' under Section 1(4).
Statutory Text
(c) Consumer Notification: Consumers must be notified when: (1) They are being targeted or influenced by AI systems in a way that materially impacts their decisions; (2) Algorithms are used to determine pricing, eligibility, or access to services.
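Section 4(c)'s trigger is a disjunction: notice is owed either when AI materially influences a consumer's decisions or, unconditionally, when algorithms determine pricing, eligibility, or access. A hypothetical sketch of that predicate (parameter names are illustrative, not drawn from the bill):

```python
def section_4c_notice_required(materially_influences_decisions: bool,
                               algorithmic_pricing_eligibility_or_access: bool) -> bool:
    """Section 4(c) notification trigger. Algorithmic pricing, eligibility,
    or access determinations ALWAYS require consumer notice, independent of
    whether the interaction rises to a 'consequential decision' under
    Section 1(4); behavioral influence requires notice only when it
    materially impacts the consumer's decisions."""
    return (materially_influences_decisions
            or algorithmic_pricing_eligibility_or_access)
```

This is why the pricing/eligibility prong is described above as notable: it bypasses the consequential-decision threshold that gates Section 3(c).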