HB-4668
MI · State · USA
● Pending
Proposed Effective Date
2026-01-01
Michigan House Bill No. 4668 — Artificial Intelligence Safety and Security Transparency Act
Summary

Requires large developers of foundation models (those meeting a dual compute-cost threshold: $5 million for a single model and $100 million in aggregate over the preceding 12 months) to produce, implement, and conspicuously publish a detailed safety and security protocol addressing critical risks, including CBRN weapons, cyberattacks, and autonomous criminal conduct. Large developers must publish transparency reports at least every 90 days containing risk assessment conclusions and capability assessments, retain testing records for 5 years, and undergo annual third-party audits whose reports must be publicly published. The act includes whistleblower protections with a private right of action for employees who face retaliation, and requires internal anonymous reporting channels with monthly status updates. Enforcement rests primarily with the Attorney General, who may seek civil fines of up to $1,000,000 per violation of the safety protocol, transparency, recordkeeping, and audit obligations.

Enforcement & Penalties
Enforcement Authority
The Attorney General may bring civil actions for violations of sections 7 and 9 (safety protocol, transparency reporting, recordkeeping, and audit obligations), and may also seek injunctive relief when a large developer's activities present an imminent critical risk. Employees alleging whistleblower retaliation under section 11 have a private right of action that must be filed within 90 days of the alleged violation, in the circuit court for the county where the violation occurred, where the complainant resides, or where the defendant resides or has its principal place of business. The employee must show by clear and convincing evidence that the employee was about to make a protected report.
Penalties
For violations of sections 7 and 9, the Attorney General may seek civil fines of up to $1,000,000 per violation plus injunctive or declaratory relief; courts may weigh the severity of the violation and whether a critical risk materialized or could have materialized. For whistleblower retaliation under section 11, employees may recover injunctive relief, actual damages, reasonable attorney fees, witness fees, court costs, and any other relief the court considers appropriate, including reinstatement, back wages, and full reinstatement of fringe benefits and seniority rights. Violations of the whistleblower notice and internal reporting obligations under section 11 carry a civil fine of up to $500 per violation, deposited into the general fund.
Who Is Covered
"Large developer" means a person that has developed both of the following: (i) A foundation model with a quantity of computing power that costs not less than $5,000,000.00 when measured using prevailing market prices of cloud computing in the United States at the time that the computing power was used. (ii) Within the immediately preceding 12 months, 1 or more foundation models with a total quantity of computing power that costs not less than $100,000,000.00 when measured using prevailing market prices of cloud computing in the United States at the time the computing power was used.
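The two prongs of the definition above are cumulative. A minimal sketch of the check, assuming costs have already been measured against prevailing US cloud market prices as the act requires (the function name and inputs are hypothetical illustrations, not terms from the bill):

```python
# Illustrative sketch of the HB 4668 "large developer" dual threshold.
# Both prongs must be satisfied:
#   (i)  at least one foundation model whose training compute cost >= $5M, and
#   (ii) foundation models trained in the preceding 12 months whose
#        aggregate compute cost >= $100M.
# Inputs are assumed to be USD figures at prevailing cloud market prices.

PER_MODEL_THRESHOLD = 5_000_000        # prong (i): $5M for a single model
AGGREGATE_THRESHOLD = 100_000_000      # prong (ii): $100M, trailing 12 months

def is_large_developer(max_single_model_cost: int,
                       trailing_12mo_aggregate_cost: int) -> bool:
    """Return True only when both compute-cost prongs are met."""
    return (max_single_model_cost >= PER_MODEL_THRESHOLD
            and trailing_12mo_aggregate_cost >= AGGREGATE_THRESHOLD)

# A developer with a $6M model but only $40M of trailing-12-month compute
# is not covered; neither is one with $150M aggregate spread across models
# that each cost under $5M.
```

The point of the sketch is that neither prong alone triggers coverage: a single expensive model is not enough without the aggregate spend, and vice versa.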
What Is Covered
"Foundation model" means an artificial intelligence model that meets all of the following requirements: (i) Is trained on a broad data set. (ii) Is designed for generality of output. (iii) Is adaptable to a wide range of distinctive tasks.
Compliance Obligations · 11 obligations
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System · Foundation Model
Sec. 7(1)(a)-(b)
Plain Language
Large developers must produce, implement, comply with, and publicly publish a safety and security protocol that addresses critical risks (CBRN weapons, cyberattacks, autonomous criminal conduct) as defined by section 5. If the protocol is materially modified, the changes must be conspicuously published within 30 days. This is a continuous operating obligation — the developer must both follow and publish the protocol, not merely document it.
Statutory Text
(1) Beginning on January 1, 2026, a large developer shall do all of the following: (a) Produce, implement, follow, and conspicuously publish a safety and security protocol. (b) If materially modifying the safety and security protocol under subdivision (a), conspicuously publish the modifications not more than 30 days after the material modification was made.
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System · Foundation Model
Sec. 5(a)-(l)
Plain Language
This section specifies the mandatory contents of the safety and security protocol. The protocol must detail: risk exclusion criteria for lower-risk models, intolerable risk thresholds and escalation procedures, testing and assessment procedures (including evasion, misuse, and model proliferation scenarios), deployment gating procedures, physical/digital/organizational security protections against unauthorized access, safeguard efficacy assessments, critical risk incident response procedures, re-assessment triggers for model modifications, incident reporting conditions, protocol modification conditions, scientific reproducibility disclosures, and the role of financially disinterested third parties. This is a content specification for the protocol required by section 7(1)(a), not an independent obligation.
Statutory Text
Sec. 5. A safety and security protocol must describe in detail all of the following, as applicable: (a) How the large developer excludes certain foundation models from being covered by the safety and security protocol when those foundation models pose a limited critical risk. (b) The thresholds at which critical risks would be considered intolerable, any justification for the thresholds, and what the large developer will do if a threshold is surpassed. (c) The testing and assessment procedures the large developer uses to investigate critical risks and how the tests and procedures account for the possibility that a foundation model could evade the control of the large developer or user or be misused, modified, executed with increased computational resources, or used to create another foundation model. (d) The procedure the large developer will use to determine if and how to deploy a foundation model when doing so poses critical risks. (e) The physical, digital, and organizational security protection the large developer will implement to prevent insiders or third parties from accessing foundation models within the large developer's control in a manner that is unauthorized by the developer and could create a critical risk. (f) Any safeguards and risk mitigation measures the large developer uses to reduce critical risks from the large developer's foundation models and how the large developer assesses efficacy and limitations. (g) How the large developer will respond if a critical risk materializes or is imminent. (h) The procedures that the large developer uses to determine whether to conduct additional assessments for a critical risk when the large developer modifies or expands access to the large developer's foundation models or combines the foundation models with other software and how such assessments are conducted. 
(i) The conditions under which the large developer will report an incident relevant to a critical risk that occurs in connection with 1 or more of the large developer's foundation models and the entities to which the large developer will make those reports. (j) The conditions under which the large developer will modify the large developer's safety and security protocol. (k) The parts of the safety and security protocol that the large developer believes provide sufficient scientific detail to allow for the independent assessment of the methods used to generate the results, evidence, and analysis, and to which experts any unredacted versions are made available. (l) Any other role a financially disinterested third party plays under subdivisions (a) to (k).
G-02 Public Transparency & Documentation · G-02.3 · Developer · Frontier AI System · Foundation Model
Sec. 7(1)(c)
Plain Language
Large developers must publicly publish a transparency report at least every 90 days. Each report covers the 90-day window running from 120 days before publication to 30 days before publication, so the window closes 30 days ahead of publication; reports issued exactly 90 days apart cover contiguous, non-overlapping windows, and publishing more frequently produces overlap. Reports must include: conclusions of all risk assessments conducted during the period, updated capability assessments for the highest-risk foundation model for each critical risk type (if changed from the prior period), and, when a newly deployed or modified model poses higher critical risk than existing deployed models, the decision rationale and safeguards implemented. This creates ongoing public visibility into the developer's risk posture.
Statutory Text
(c) Not less than once every 90 days, produce and conspicuously publish a transparency report that covers the period of 120 days before the publishing of the report to 30 days before the publishing of the report that includes all of the following information: (i) The conclusion of any risk assessments made during the reporting period in accordance with the safety and security protocol under subdivision (a). (ii) If different from the preceding reporting period, for each type of critical risk, an assessment of the relevant capability of the foundation model to create that critical risk of whichever of the large developer's foundation models, whether deployed or not, would pose the highest level of that critical risk if deployed without adequate safeguards and protections. (iii) If, during the reporting period, the large developer has deployed or modified a foundation model that would pose a higher level of critical risk than any of the large developer's existing deployed foundation models if deployed without adequate safeguards and protections, both of the following: (A) The grounds on which and the process by which the large developer decided to deploy the foundation model. (B) Any safeguards and protections implemented by the large developer to mitigate critical risks.
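The window geometry in subdivision (c) can be sketched with simple date arithmetic (an illustrative sketch only; the dates below are hypothetical examples, and the act fixes the window solely by reference to each report's publication date):

```python
from datetime import date, timedelta

# Sec. 7(1)(c): each transparency report covers the period from 120 days
# before publication to 30 days before publication, i.e. a 90-day window
# that closes 30 days ahead of the report itself.

def reporting_window(publication_date: date) -> tuple[date, date]:
    start = publication_date - timedelta(days=120)
    end = publication_date - timedelta(days=30)
    return start, end

# Two reports published exactly 90 days apart (hypothetical dates):
w1 = reporting_window(date(2026, 5, 1))                       # first report
w2 = reporting_window(date(2026, 5, 1) + timedelta(days=90))  # next report

# w1 ends exactly where w2 begins, so a strict 90-day cadence yields
# contiguous windows; publishing more often than every 90 days is what
# makes consecutive windows overlap.
```

This makes concrete why the statute pairs a 90-day maximum cadence with a 90-day window: no covered period is ever skipped, regardless of how often the developer publishes.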
G-01 AI Governance Program & Documentation · G-01.3 · G-01.4 · Developer · Frontier AI System · Foundation Model
Sec. 7(1)(d)
Plain Language
Large developers must create and retain detailed records of all critical risk assessments — including the specific tests used and results obtained — for at least 5 years. Records must be sufficiently detailed that a qualified third party could replicate the testing. This is a contemporaneous recordkeeping obligation that supports both the annual audit requirement (section 9) and the attorney general's inspection rights.
Statutory Text
(d) Record and retain for 5 years any specific tests used and results obtained as a part of an assessment of critical risk with sufficient detail for qualified third parties to replicate the testing.
G-01 AI Governance Program & Documentation · G-01.4 · Developer · Frontier AI System · Foundation Model
Sec. 7(3)-(4)
Plain Language
All documents published under this act must appear on a conspicuous page of the developer's website. Developers (and auditors for audit reports) may redact for trade secrets, public safety, national security, or legal compliance, but must: (1) retain the unredacted version for at least 5 years and make it available to the attorney general on request, and (2) describe the nature and justification of each redaction in the published version. This creates a dual-track system — the public gets a redacted version with explained redactions, while the attorney general can inspect unredacted originals.
Statutory Text
(3) If a large developer publishes a document in accordance with the requirements of this act, the large developer shall publish the information on a conspicuous page on the large developer's website. The large developer may redact the document as reasonably necessary to protect the large developer's trade secrets, public safety, or national security, or to comply with applicable law. An auditor required to perform an audit and produce a report under section 9 may redact information from the report using the same procedure described in this subsection before the publication of that report under section 9(3). (4) If a large developer or auditor makes a redaction under subsection (3), the large developer or auditor shall do both of the following: (a) Retain an unredacted version of the document for not less than 5 years and provide the attorney general with the ability to inspect the unredacted document on request. (b) Describe the character and justification of the redactions in the published version of the document.
S-03 Frontier Model Safety Obligations · S-03.3 · Developer · Frontier AI System · Foundation Model
Sec. 7(2)
Plain Language
Large developers are prohibited from knowingly making false or materially misleading statements or omissions in any documents produced under the act, including the safety and security protocol, transparency reports, and testing records. This anti-fraud provision applies to all published documents and creates independent liability — a developer that publishes technically compliant documents containing knowing falsehoods violates this subsection regardless of whether the underlying safety obligations are met.
Statutory Text
(2) A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced in accordance with this section.
G-01 AI Governance Program & Documentation · G-01.5 · Developer · Frontier AI System · Foundation Model
Sec. 9(1)-(4)
Plain Language
At least once per year, large developers must retain a reputable third-party auditor to assess: (1) whether the developer complied with its own safety and security protocol and document any noncompliance, (2) whether the protocol was stated clearly enough to determine compliance, and (3) whether the developer made false or misleading statements or violated publication/redaction requirements. The auditor must include at least one individual with corporate compliance expertise and one with technical expertise in foundation model safety. The developer must grant the auditor full access to all materials produced under the act and any other reasonably necessary materials. The audit report must be publicly published within 90 days of completion (subject to the redaction procedures in section 7(3)-(4)).
Statutory Text
(1) Beginning on January 1, 2026, not less than once per year, a large developer shall retain a reputable third-party auditor to produce a report that assesses all of the following: (a) If the large developer has complied with the large developer's safety and security protocol and any instances of noncompliance. (b) Any instance where the large developer's safety and security protocol was not stated clearly enough to determine if the large developer has complied with the safety and security protocol. (c) Any instance that the auditor believes the large developer violated section 7(2), (3), or (4). (2) A large developer shall grant the auditor access to all materials produced to comply with this act and any other materials reasonably necessary to perform the assessment under subsection (1). (3) Not more than 90 days after the completion of the auditor's report under subsection (1), a large developer shall conspicuously publish that report. (4) In conducting an audit under this section, an auditor shall employ or contract 1 or more individuals with expertise in corporate compliance and 1 or more individuals with technical expertise in the safety of foundation models.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer · Frontier AI System · Foundation Model
Sec. 11(1)
Plain Language
Large developers may not retaliate — through discharge, threats, or any discrimination in compensation, terms, conditions, location, or privileges of employment — against an employee who reports or is about to report to a federal or state authority that the developer's activities pose a critical risk. The protection extends to reports made verbally or in writing and covers actions taken by individuals on behalf of the employee. The employee definition is broad, including contractors, subcontractors, unpaid advisors involved in risk assessment, and corporate officers. The only exception is if the employee knows the report is false.
Statutory Text
(1) A large developer shall not discharge, threaten, or otherwise discriminate against an employee regarding the employee's compensation, terms, conditions, location, or privileges of employment because the employee, or an individual acting on behalf of the employee, reports or is about to report to an appropriate federal or state authority, verbally or in writing, information that indicates that the large developer's activities pose a critical risk, unless the employee knows that the report is false.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.1 · G-03.2 · G-03.4 · Developer · Frontier AI System · Foundation Model
Sec. 11(5)-(6)
Plain Language
Large developers must: (1) post notices and use other appropriate means to keep employees informed of their whistleblower protections and obligations; (2) maintain a reasonable anonymous internal reporting channel through which employees can disclose information about critical risks in good faith; and (3) provide monthly status updates to disclosing employees on the investigation and any responsive actions. Disclosures and updates must be retained for at least 7 years and shared with non-conflicted officers and directors at least quarterly. The notice obligation is ongoing — not a one-time posting — and the internal process must be functional and accessible, not merely documented.
Statutory Text
(5) A large developer shall do both of the following: (a) Post notices and use other appropriate means to keep the large developer's employees informed of the employees' protections and obligations under this section. (b) Provide a reasonable internal process through which both of the following occur: (i) An employee may anonymously disclose information to the large developer if the employee believes in good faith that the information indicates the large developer's activities present a critical risk. (ii) A monthly update is given to the employee under subparagraph (i) regarding the status of the large developer's investigation of the disclosure and any actions taken by the large developer in response to the disclosure. (6) A large developer shall maintain the disclosures and updates provided under subsection (5)(b) for not less than 7 years after the date when the disclosure or update was created. Each disclosure and update must be shared with the officers and directors of the large developer who do not have a conflict of interest not less than once per quarter.
Other · Frontier AI System · Foundation Model
Sec. 11(2)-(4)
Plain Language
Employees alleging whistleblower retaliation may bring a civil action within 90 days seeking injunctive relief, actual damages, attorney fees, witness fees, court costs, and any other appropriate relief including reinstatement, back wages, and fringe benefits. The employee must prove by clear and convincing evidence that they were about to make a protected report. Venue is in the circuit court for the county where the violation occurred, where the complainant resides, or where the defendant resides or has its principal place of business. This is an enforcement mechanism for the anti-retaliation obligation in section 11(1), not an independent compliance obligation.
Statutory Text
(2) An employee who alleges a violation of subsection (1) may bring a civil action not more than 90 days after the occurrence of the alleged violation seeking 1 or more of the following: (a) Injunctive relief. (b) Actual damages. (c) Reasonable attorney fees, witness fees, and court costs. (d) Any other relief the court considers appropriate, including the reinstatement of the employee, the payment of back wages, and full reinstatement of fringe benefits and seniority rights. (3) An employee who brings a civil action under subsection (2) must show by clear and convincing evidence that the employee, or an individual acting on behalf of the employee, was about to make a report protected by subsection (1). (4) A civil action commenced under subsection (2) may be brought in the circuit court for the county where the alleged violation occurred, the county where the complainant resides, or the county where the person against whom the civil complaint is filed resides or has the person's principal place of business.
Other · Frontier AI System · Foundation Model
Sec. 13(1)-(3)
Plain Language
The Attorney General may bring civil actions against large developers for violations of sections 7 (safety protocol, transparency reporting, and recordkeeping) and 9 (annual audit), seeking fines up to $1,000,000 per violation and/or injunctive or declaratory relief. Courts consider violation severity and proximity to critical risk materialization. The Attorney General may also seek injunctive relief when a developer's activities present an imminent critical risk, even absent a specific statutory violation. This is the enforcement mechanism, not an independent compliance obligation.
Statutory Text
(1) If a large developer violates section 7 or 9, the attorney general may bring a civil action seeking 1 or both of the following: (a) A civil fine of not more than $1,000,000.00 per violation. (b) Injunctive or declaratory relief. (2) In determining the relief granted under subsection (1), the court may consider both of the following: (a) The severity of the violation. (b) If the violation resulted in, or could have resulted in, the materialization of a critical risk. (3) If a large developer's activities present an imminent critical risk, the attorney general may bring a civil action seeking injunctive relief.