H-97
MA · State · USA
Status: Pre-filed
Proposed Effective Date: 2025-07-17
An Act protecting consumers in interactions with artificial intelligence systems (House No. 97, 194th General Court)
Summary

Massachusetts H 97 imposes obligations on developers and deployers of high-risk AI systems — defined as AI systems that make or substantially factor into consequential decisions affecting education, employment, financial services, government services, healthcare, housing, insurance, or legal services. Developers must exercise reasonable care to prevent algorithmic discrimination, provide detailed documentation to deployers, publish a public use case inventory, and report discovered discrimination risks to the attorney general and deployers within 90 days. Deployers must implement iterative risk management programs, complete annual impact assessments, notify consumers before AI-informed consequential decisions are made, provide explanations and appeal rights for adverse decisions, and publish summaries of deployed systems. The attorney general has exclusive enforcement authority; violations are treated as unfair trade practices under Chapter 93A. The bill includes a rebuttable presumption of compliance for entities following its requirements, an affirmative defense for entities that discover and cure violations through testing or internal review while following recognized frameworks like the NIST AI RMF, and a small-deployer exemption for entities with fewer than 50 employees that do not use their own data to train the system.

Enforcement & Penalties
Enforcement Authority
The attorney general has exclusive authority to enforce this chapter. Enforcement is agency-initiated. Violations constitute unfair trade practices under Chapter 93A. An affirmative defense is available if the entity discovers and cures a violation through feedback, adversarial testing, red teaming, or internal review, and is otherwise in compliance with the NIST AI RMF, ISO/IEC 42001, or another recognized risk management framework. No private right of action is created — the statute expressly provides that it does not provide the basis for, and is not subject to, a private right of action for violations of this chapter or any other law.
Penalties
Violations constitute unfair trade practices under Chapter 93A, which provides the attorney general with authority to seek injunctive relief, civil penalties up to $5,000 per violation, and restitution. The statute itself does not specify separate penalty amounts; remedies are those available under Chapter 93A enforcement actions. There is no private right of action and no statutory damages provision.
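Where useful for compliance teams, the cure-based affirmative defense can be modeled as a simple checklist. Below is a minimal sketch in Python; the record fields, string labels, and `affirmative_defense_available` helper are our own illustrative names, not statutory terms, and the real determination is a legal one.

```python
from dataclasses import dataclass

# Illustrative model of the Chapter 93A affirmative defense described
# above. Labels and structure are hypothetical, not statutory terms.

QUALIFYING_DISCOVERY = {
    "feedback", "adversarial testing", "red teaming", "internal review",
}
# Or another nationally/internationally recognized framework.
RECOGNIZED_FRAMEWORKS = {"NIST AI RMF", "ISO/IEC 42001"}

@dataclass
class ViolationRecord:
    discovered_via: str   # how the entity found the violation
    cured: bool           # whether the violation was cured
    framework: str        # framework the entity otherwise complies with

def affirmative_defense_available(v: ViolationRecord) -> bool:
    """All three statutory conditions must hold: qualifying discovery,
    cure, and compliance with a recognized framework."""
    return (
        v.discovered_via in QUALIFYING_DISCOVERY
        and v.cured
        and v.framework in RECOGNIZED_FRAMEWORKS
    )

# Example: a violation found via red teaming, cured, by an entity
# following the NIST AI RMF qualifies for the defense.
assert affirmative_defense_available(
    ViolationRecord("red teaming", True, "NIST AI RMF")
)
```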
Who Is Covered
"Deployer" means a person doing business in this state that deploys a high-risk artificial intelligence system.
"Developer" means a person doing business in this state that develops or intentionally and substantially modifies an artificial intelligence system.
What Is Covered
"High-risk artificial intelligence system" means any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision. "High-risk artificial intelligence system" does not include: (1) an artificial intelligence system if the artificial intelligence system is intended to: (i) perform a narrow procedural task; or (ii) detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review; or (2) the following technologies, unless the technologies, when deployed, make, or are a substantial factor in making, a consequential decision: (i) anti-fraud technology that does not use facial recognition technology; (ii) anti-malware; (iii) anti-virus; (iv) artificial intelligence-enabled video games; (v) calculators; (vi) cybersecurity; (vii) databases; (viii) data storage; (ix) firewall; (x) internet domain registration; (xi) internet website loading; (xii) networking; (xiii) spam- and robocall-filtering; (xiv) spell-checking; (xv) spreadsheets; (xvi) web caching; (xvii) web hosting or any similar technology; or (xviii) technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.
Compliance Obligations · 13 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1 · H-02.2 · H-02.3 · Developer · Automated Decisionmaking
Ch. 93M § 2(a)-(b)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks arising from intended and contracted uses. They must provide deployers with comprehensive documentation including: foreseeable uses and misuses, training data summaries, known limitations and discrimination risks, pre-deployment evaluation methods for bias, data governance measures, intended outputs, mitigation steps taken, and guidance on human monitoring. A rebuttable presumption of reasonable care applies in AG enforcement actions if the developer complied with these requirements. Trade secrets and information protected by law need not be disclosed.
Statutory Text
(a) Not later than 6 months after the effective date of this act, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought not later than 6 months after the effective date of this act, by the attorney general pursuant to section 6, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 7. (b) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (1) a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (2) documentation disclosing: (i) high-level summaries of the type of data used to train the high-risk artificial intelligence system; (ii) known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system; (iii) the purpose of the high-risk artificial intelligence system; (iv) the intended benefits and uses of the high-risk artificial intelligence system; and (v) all other information necessary to allow the deployer to comply with the requirements of section 3; (3) documentation describing: (i) how the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (ii) the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of the high-risk artificial intelligence system; (iv) the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and (v) how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (4) any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitor the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination. (f) nothing in subsections (b) to (e) of this section requires a developer to disclose a trade secret, information protected from disclosure by state or federal law, or information that would create a security risk to the developer.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Ch. 93M § 2(c)
Plain Language
Developers must provide deployers with documentation — such as model cards, dataset cards, or impact assessments — sufficient for deployers to complete the impact assessments required under Section 3(c). This is a feasibility-qualified obligation. A developer that also serves as the deployer for a system is exempt from generating this documentation unless the system is also provided to an unaffiliated deployer.
Statutory Text
(c) (1) except as provided in subsection (f) of this section, a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system not later than 6 months after the effective date of this act, shall make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to section 3 (c). (2) a developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Automated Decisionmaking
Ch. 93M § 2(d)
Plain Language
Developers must publish on their website or in a public use case inventory a clear statement summarizing: (1) the types of high-risk AI systems they develop or substantially modify and make available, and (2) how they manage algorithmic discrimination risks for those systems. This statement must be kept current and updated within 90 days of any intentional and substantial modification. Note that the definition of 'intentional and substantial modification' excludes continuous learning changes that were predetermined and documented in the initial impact assessment.
Statutory Text
(d) (1) Not later than 6 months after the effective date of this act, a developer shall make available, in a manner that is clear and readily available on the developer's website or in a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (ii) how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with subsection (d)(1)(i) of this section. (2) a developer shall update the statement described in subsection (d)(1) of this section: (i) as necessary to ensure that the statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subsection (d)(1)(i) of this section.
R-01 Incident Reporting · R-01.3 · Developer · Automated Decisionmaking
Ch. 93M § 2(e)
Plain Language
When a developer discovers — through its own testing or a credible deployer report — that its high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, the developer must notify the attorney general and all known deployers or other developers within 90 days. The disclosure must describe the known or reasonably foreseeable discrimination risks. This is a dual-trigger obligation: it applies both when the developer self-discovers and when it receives a credible external report.
Statutory Text
(e) Not later than 6 months after the effective date of this act, a developer of a high-risk artificial intelligence system shall disclose to the attorney general, in a form and manner prescribed by the attorney general, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which: (1) the developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (2) the developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.
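The reporting clock can be sketched as simple date arithmetic. The helper below is illustrative only; the statute's "without unreasonable delay" language means ninety days is an outer bound, not a safe harbor.

```python
from datetime import date, timedelta

# Hypothetical deadline helper for the Section 2(e) clock. "Ninety days"
# is an outer bound; disclosure must also come "without unreasonable
# delay," which this arithmetic cannot capture.

REPORTING_WINDOW = timedelta(days=90)

def disclosure_deadline(trigger_date: date) -> date:
    """Latest date to notify the attorney general and all known
    deployers or other developers.

    Triggers: (1) the developer's own ongoing testing shows the deployed
    system caused, or is reasonably likely to have caused, algorithmic
    discrimination; or (2) a credible deployer report that it did.
    """
    return trigger_date + REPORTING_WINDOW

# Example: a credible report received 2026-03-01 must be disclosed by
# 2026-05-30 at the latest.
assert disclosure_deadline(date(2026, 3, 1)) == date(2026, 5, 30)
```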
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking
Ch. 93M § 2(g)
Plain Language
The attorney general may at any time request that a developer produce the documentation described in Section 2(b) — including training data summaries, bias evaluation methodology, data governance measures, and intended uses — within 90 days. The documentation is exempt from public records disclosure. Developers may designate materials as containing proprietary information or trade secrets, and producing privileged materials does not waive attorney-client privilege or work-product protection.
Statutory Text
(g) Not later than 6 months after the effective date of this act, the attorney general may require that a developer disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the statement or documentation described in subsection (b) of this section. The attorney general may evaluate such statement or documentation to ensure compliance with this chapter, and the statement or documentation is not subject to disclosure under the "Massachusetts Public Records Law", chapter 66, section 10 of the General Laws. In a disclosure pursuant to this subsection (g), a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
H-02 Non-Discrimination & Bias Assessment · H-02.3 · Deployer · Automated Decisionmaking
Ch. 93M § 3(a)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks. A rebuttable presumption of compliance applies in attorney general enforcement actions if the deployer complied with all requirements of this section plus any AG-promulgated rules. This establishes the overarching deployer duty — the specific compliance mechanisms are detailed in Sections 3(b)-(e).
Statutory Text
(a) Not later than 6 months after the effective date of this act, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought not later than 6 months after the effective date of this act, by the attorney general pursuant to section 6, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 7.
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · Deployer · Automated Decisionmaking
Ch. 93M § 3(b)
Plain Language
Deployers must implement a documented risk management policy and program governing their deployment of high-risk AI systems. The program must identify, document, and mitigate algorithmic discrimination risks using defined principles, processes, and personnel. It must be iterative, regularly and systematically reviewed and updated over the system lifecycle. Reasonableness is assessed against the NIST AI RMF, ISO/IEC 42001, or other recognized frameworks (or AG-designated frameworks), the deployer's size and complexity, the nature of deployed systems, and data sensitivity and volume. A single program may cover multiple systems. A small-deployer exemption applies under Section 3(f) for deployers with fewer than 50 employees that do not use their own data to train the system.
Statutory Text
(b) (1) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection (b) must be reasonable considering: (i) (A) the guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (B) any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer. (2) a risk management policy and program implemented pursuant to subsection (b)(1) of this section may cover multiple high-risk artificial intelligence systems deployed by the deployer.
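For teams tracking the Section 3(b) and 3(f) mechanics, a minimal sketch follows. Field names and the `small_deployer_exempt` helper are our assumptions; "reasonableness" under Section 3(b) is a weighing of factors, not a computable predicate, so the factors are listed only for reference.

```python
from dataclasses import dataclass

# Hedged sketch of the Section 3(f) exemption test referenced above.
# Field names are illustrative, not statutory terms.

@dataclass
class DeployerProfile:
    employee_count: int
    trains_system_with_own_data: bool

def small_deployer_exempt(d: DeployerProfile) -> bool:
    """Fewer than 50 employees and the deployer does not use its own
    data to train the high-risk system."""
    return d.employee_count < 50 and not d.trains_system_with_own_data

# Section 3(b) factors a risk management program is judged against:
REASONABLENESS_FACTORS = (
    "NIST AI RMF, ISO/IEC 42001, another recognized framework, or an "
    "AG-designated framework",
    "size and complexity of the deployer",
    "nature, scope, and intended uses of the deployed systems",
    "sensitivity and volume of data processed",
)

assert small_deployer_exempt(DeployerProfile(12, False))
assert not small_deployer_exempt(DeployerProfile(12, True))
```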
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.8 · H-02.10 · Deployer · Automated Decisionmaking
Ch. 93M § 3(c)
Plain Language
Deployers must complete a comprehensive impact assessment for each high-risk AI system before deployment and at least annually thereafter, plus within 90 days of any intentional and substantial modification. The assessment must cover: system purpose and use cases, algorithmic discrimination risk analysis with mitigation steps, data input/output categories, customization data, performance metrics, transparency measures, and post-deployment monitoring. A single assessment may cover a comparable set of systems. Impact assessments completed under other applicable laws count if reasonably similar in scope. All assessments and records must be retained for at least three years after final deployment. Additionally, deployers must conduct at least annual reviews to affirmatively verify each system is not causing algorithmic discrimination. The small-deployer exemption under Section 3(f) applies.
Statutory Text
(c) (1) except as provided in subsections (c)(4), (c)(5), and (f) of this section: (i) a deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system not later than 6 months after the effective date of this act, shall complete an impact assessment for the high-risk artificial intelligence system; and (ii) Not later than 6 months after the effective date of this act, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. (2) an impact assessment completed pursuant to this subsection (c) must include, at a minimum, and to the extent reasonably known by or available to the deployer: (i) a statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (iii) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (iv) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (v) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vi) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and (vii) a description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system. (3) in addition to the information required under subsection (3)(b) of this section, an impact assessment completed pursuant to this subsection (c) following an intentional and substantial modification to a high-risk artificial intelligence system not later than 6 months after the effective date of this act, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system. (4) a single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) if a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection (c) if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection (c). 
(6) a deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection (c), all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system. (7) Not later than 6 months after the effective date of this act, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
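The assessment's minimum contents and cadence lend themselves to a structured record. The sketch below uses our own field names keyed to the statutory clauses; it is illustrative, not a compliance artifact.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative record keyed to the Section 3(c)(2) minimum contents.
# Field names are our own labels, not statutory terms.

@dataclass
class ImpactAssessment:
    purpose_use_cases_context_benefits: str   # (c)(2)(i)
    discrimination_risk_analysis: str         # (c)(2)(ii): risks + mitigation
    input_output_data_categories: str         # (c)(2)(iii)
    customization_data_overview: str          # (c)(2)(iv), if customized
    performance_metrics_and_limitations: str  # (c)(2)(v)
    transparency_measures: str                # (c)(2)(vi)
    post_deployment_monitoring: str           # (c)(2)(vii)
    completed_on: date = field(default_factory=date.today)

RETENTION_YEARS = 3  # (c)(6): keep assessments and related records at
                     # least three years after final deployment.

def next_assessment_due(last_completed: date,
                        modified_on: date | None = None) -> date:
    """At least annually, accelerated to within ninety days after an
    intentional and substantial modification, whichever comes first."""
    annual = last_completed + timedelta(days=365)
    if modified_on is not None:
        return min(annual, modified_on + timedelta(days=90))
    return annual
```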
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.2 · H-01.3 · H-01.4 · H-01.5 · Deployer · Automated Decisionmaking
Ch. 93M § 3(d)
Plain Language
Before making a consequential decision about a consumer using a high-risk AI system, the deployer must: (1) notify the consumer that AI will be used, (2) disclose the system's purpose, the nature of the decision, deployer contact information, and a plain-language description of the system, and (3) inform the consumer about opt-out rights for profiling. If the decision is adverse, the deployer must additionally provide: the principal reasons for the decision (including the AI system's contribution, data types used, and data sources), an opportunity to correct incorrect personal data, and an appeal mechanism that includes human review where technically feasible. All notices must be provided directly, in plain language, in all languages the deployer uses in its business, and in disability-accessible formats. If direct delivery is impossible, a method reasonably calculated to reach the consumer is acceptable.
Statutory Text
(d) (1) Not later than 6 months after the effective date of this act, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; (ii) provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by subsection (5)(a) of this section; and (iii) provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer. (2) Not later than 6 months after the effective date of this act, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer: (i) a statement disclosing the principal reason or reasons for the consequential decision, including: (A) the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision; (B) the type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (C) the source or sources of the data described in subsection (d)(2)(i)(B) of this section; (ii) an opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (iii) an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer. (3) (i) except as provided in subsection (d)(3)(ii) of this section, a deployer shall provide the notice, statement, contact information, and description required by subsections (c)(1) and (d)(2) of this section: (A) directly to the consumer; (B) in plain language; (C) in all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (D) in a format that is accessible to consumers with disabilities. (ii) if the deployer is unable to provide the notice, statement, contact information, and description required by subsections (d)(1) and (d)(2) of this section directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
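The adverse-decision package in Section 3(d)(2) has a fixed minimum content list, which can be sketched as a record type. Names below are ours; the statute prescribes content and delivery rules, not a data format.

```python
from dataclasses import dataclass

# Minimal sketch of the Section 3(d)(2) adverse-decision package. The
# record shape is an assumption, for illustration only.

@dataclass
class AdverseDecisionNotice:
    principal_reasons: str            # (d)(2)(i)
    ai_contribution: str              # (d)(2)(i)(A): degree and manner
    data_types_processed: list[str]   # (d)(2)(i)(B)
    data_sources: list[str]           # (d)(2)(i)(C)
    correction_channel: str           # (d)(2)(ii): fix incorrect personal data
    appeal_channel: str               # (d)(2)(iii): human review if feasible

# Section 3(d)(3) delivery rules for every consumer notice:
DELIVERY_RULES = (
    "directly to the consumer (or by a method reasonably calculated "
    "to reach them)",
    "in plain language",
    "in all languages the deployer uses with consumers in the ordinary "
    "course of business",
    "in a format accessible to consumers with disabilities",
)
```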
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Ch. 93M § 3(e)
Plain Language
Deployers must publish on their website a clear, readily accessible statement summarizing: (1) the types of high-risk AI systems currently deployed, (2) how they manage algorithmic discrimination risks for each system, and (3) detailed information about the nature, source, and extent of data they collect and use. The statement must be periodically updated. The small-deployer exemption under Section 3(f) applies to this obligation.
Statutory Text
(e) (1) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence systems that are currently deployed by the deployer; (ii) how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system described pursuant to subsection (e)(1)(i) of this section; and (iii) in detail, the nature, source, and extent of the information collected and used by the deployer. (2) a deployer shall periodically update the statement described in subsection (e)(1) of this section.
R-01 Incident Reporting · R-01.3 · Deployer · Automated Decisionmaking
Ch. 93M § 3(g)
Plain Language
If a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, the deployer must notify the attorney general within 90 days of discovery, in the form and manner the AG prescribes. The notice must disclose the discovery. This is a deployer-side counterpart to the developer's reporting obligation under Section 2(e).
Statutory Text
(g) if a deployer deploys a high-risk artificial intelligence system not later than 6 months after the effective date of this act, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking
Ch. 93M § 3(i)
Plain Language
The attorney general may at any time request a deployer (or its contracted third party) to produce within 90 days: the risk management policy, any impact assessment, or retained records. All such materials are exempt from Massachusetts public records disclosure. Deployers may designate materials as proprietary or trade secret, and producing privileged materials does not waive attorney-client or work-product protection.
Statutory Text
(i) Not later than 6 months after the effective date of this act, the attorney general may require that a deployer, or a third party contracted by the deployer, disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subsection (b) of this section, the impact assessment completed pursuant to subsection (c) of this section, or the records maintained pursuant to subsection (c)(6) of this section. The attorney general may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and records are not subject to disclosure under the "Massachusetts Public Records Law", chapter 66, section 10 of the General Laws. In a disclosure pursuant to this subsection (i), a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
T-01 AI Identity Disclosure · T-01.1 · Developer · Deployer · Automated Decisionmaking
Ch. 93M § 4(a)-(b)
Plain Language
Any deployer or developer that makes available a consumer-facing AI system must disclose to each interacting consumer that they are interacting with an AI system. This applies to all AI systems intended to interact with consumers — not just high-risk systems. The disclosure is not required where it would be obvious to a reasonable person that the interaction is with AI. Note the broader scope: this provision covers any 'artificial intelligence system,' not just 'high-risk artificial intelligence systems' as in Sections 2 and 3.
Statutory Text
(a) Not later than 6 months after the effective date of this act, and except as provided in subsection (b) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (b) disclosure is not required under subsection (a) of this section under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
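The Section 4 rule reduces to a two-input check, with the "obvious to a reasonable person" judgment supplied rather than computed. A minimal sketch, with parameter names our own:

```python
# Sketch of the Section 4 trigger. Note the wider scope: any
# consumer-facing AI system, not only high-risk ones. The "obvious to a
# reasonable person" judgment is an input because it cannot be computed.

def identity_disclosure_required(
    interacts_with_consumers: bool,
    obviously_ai_to_reasonable_person: bool,
) -> bool:
    return interacts_with_consumers and not obviously_ai_to_reasonable_person

# A customer-service chatbot that could pass for a human agent: required.
assert identity_disclosure_required(True, False)
# A labeled "AI assistant" widget where the AI nature is obvious: exempt.
assert not identity_disclosure_required(True, True)
```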