HB-4668
MI · State · USA
● Pending
Proposed Effective Date
2026-01-01
Michigan House Bill No. 4668 — Artificial Intelligence Safety and Security Transparency Act
Summary

Requires large developers of foundation models — defined by a dual compute-cost threshold ($5M per model and $100M aggregate in the preceding 12 months) — to produce, implement, follow, and conspicuously publish a detailed safety and security protocol addressing critical risks (CBRN, cyberattack, autonomous harmful conduct causing 100+ deaths or $1B+ damages). Mandates quarterly transparency reports, five-year recordkeeping, and annual third-party audits of protocol compliance. Provides robust whistleblower protections including anonymous internal reporting channels, anti-retaliation rules, and a private right of action for employees. Enforcement is primarily by the Michigan Attorney General with civil fines up to $1M per violation of protocol and reporting requirements.

Enforcement & Penalties
Enforcement Authority
The Attorney General enforces violations of the safety protocol, transparency reporting, and audit obligations (Sections 7 and 9) via civil action. For whistleblower retaliation (Section 11), employees have a private right of action that must be brought within 90 days of the alleged violation, in the circuit court for the county where the violation occurred, where the complainant resides, or where the defendant resides or has its principal place of business. The employee must show by clear and convincing evidence that they were about to make a protected report.
Penalties
For violations of Sections 7 or 9: civil fine of up to $1,000,000 per violation; injunctive or declaratory relief. Court considers severity of violation and whether a critical risk materialized or could have materialized. For imminent critical risk: injunctive relief. For whistleblower retaliation (Section 11): injunctive relief, actual damages, reasonable attorney fees, witness fees, and court costs, reinstatement, back wages, full reinstatement of fringe benefits and seniority rights, and any other relief the court considers appropriate. Separate civil fine of up to $500 per violation of Section 11, deposited into the general fund.
Who Is Covered
"Large developer" means a person that has developed both of the following: (i) a foundation model with a quantity of computing power that costs not less than $5,000,000.00 when measured using prevailing market prices of cloud computing in the United States at the time the computing power was used; and (ii) within the immediately preceding 12 months, 1 or more foundation models with a total quantity of computing power that costs not less than $100,000,000.00 when measured using prevailing market prices of cloud computing in the United States at the time the computing power was used.
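As an illustration only (not legal advice), the dual threshold reads as a two-prong conjunctive test: prong (i) looks at any single model's training compute cost, while prong (ii) aggregates only models from the trailing 12 months. The `TrainingRun` type and function names below are hypothetical, and costs are assumed to already reflect prevailing US cloud-market prices at time of use:

```python
from dataclasses import dataclass

@dataclass
class TrainingRun:
    """One foundation-model training run (hypothetical model).
    compute_cost_usd is priced at prevailing US cloud-compute
    market rates at the time the compute was used, per the statute."""
    compute_cost_usd: float
    months_ago: int  # months before the evaluation date

# Statutory thresholds from the "large developer" definition
PER_MODEL_THRESHOLD = 5_000_000      # prong (i): any single model
AGGREGATE_THRESHOLD = 100_000_000    # prong (ii): trailing 12 months

def is_large_developer(runs: list[TrainingRun]) -> bool:
    """Both prongs must be met: (i) at least one model cost >= $5M,
    and (ii) models trained in the preceding 12 months cost >= $100M
    combined. Note prong (i) carries no 12-month limit in the text."""
    has_5m_model = any(r.compute_cost_usd >= PER_MODEL_THRESHOLD for r in runs)
    trailing_12mo = sum(r.compute_cost_usd for r in runs if r.months_ago < 12)
    return has_5m_model and trailing_12mo >= AGGREGATE_THRESHOLD
```

Because the test is conjunctive, a developer that spends $120M across many sub-$5M models, or one $6M model with no other recent training, falls outside the definition under this reading.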
What Is Covered
"Foundation model" means an artificial intelligence model that meets all of the following requirements: (i) Is trained on a broad data set. (ii) Is designed for generality of output. (iii) Is adaptable to a wide range of distinctive tasks.
Compliance Obligations (10 obligations)
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System · Foundation Model
Sec. 7(1)(a)-(b)
Plain Language
Large developers must produce, implement, follow, and conspicuously publish a safety and security protocol that addresses critical risks as defined by statute. If the protocol is materially modified, the modifications must be published within 30 days. The protocol must be publicly accessible on the developer's website. This is a continuing obligation — the developer must not only write and publish the protocol but actively follow it.
Statutory Text
(1) Beginning on January 1, 2026, a large developer shall do all of the following: (a) Produce, implement, follow, and conspicuously publish a safety and security protocol. (b) If materially modifying the safety and security protocol under subdivision (a), conspicuously publish the modifications not more than 30 days after the material modification was made.
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System · Foundation Model
Sec. 5
Plain Language
The safety and security protocol must cover twelve detailed areas: model exclusion criteria for limited-risk models, intolerable risk thresholds and responses, testing and assessment procedures (including evasion and misuse scenarios), deployment decision procedures, physical/digital/organizational security against unauthorized access, safeguard efficacy assessments, incident response procedures, procedures for reassessment upon model modification or expanded access, incident reporting conditions, protocol modification conditions, scientific reproducibility details, and the role of financially disinterested third parties. This section defines the mandatory contents of the protocol required under Section 7.
Statutory Text
Sec. 5. A safety and security protocol must describe in detail all of the following, as applicable: (a) How the large developer excludes certain foundation models from being covered by the safety and security protocol when those foundation models pose a limited critical risk. (b) The thresholds at which critical risks would be considered intolerable, any justification for the thresholds, and what the large developer will do if a threshold is surpassed. (c) The testing and assessment procedures the large developer uses to investigate critical risks and how the tests and procedures account for the possibility that a foundation model could evade the control of the large developer or user or be misused, modified, executed with increased computational resources, or used to create another foundation model. (d) The procedure the large developer will use to determine if and how to deploy a foundation model when doing so poses critical risks. (e) The physical, digital, and organizational security protection the large developer will implement to prevent insiders or third parties from accessing foundation models within the large developer's control in a manner that is unauthorized by the developer and could create a critical risk. (f) Any safeguards and risk mitigation measures the large developer uses to reduce critical risks from the large developer's foundation models and how the large developer assesses efficacy and limitations. (g) How the large developer will respond if a critical risk materializes or is imminent. (h) The procedures that the large developer uses to determine whether to conduct additional assessments for a critical risk when the large developer modifies or expands access to the large developer's foundation models or combines the foundation models with other software and how such assessments are conducted. 
(i) The conditions under which the large developer will report an incident relevant to a critical risk that occurs in connection with 1 or more of the large developer's foundation models and the entities to which the large developer will make those reports. (j) The conditions under which the large developer will modify the large developer's safety and security protocol. (k) The parts of the safety and security protocol that the large developer believes provide sufficient scientific detail to allow for the independent assessment of the methods used to generate the results, evidence, and analysis, and to which experts any unredacted versions are made available. (l) Any other role a financially disinterested third party plays under subdivisions (a) to (k).
G-02 Public Transparency & Documentation · G-02.3 · Developer · Frontier AI System · Foundation Model
Sec. 7(1)(c)
Plain Language
Large developers must publish a transparency report at least every 90 days covering a rolling window from 120 days to 30 days before publication. Each report must include risk assessment conclusions, updated capability assessments for each critical risk type (if changed), and — if a new or modified model posing higher critical risk was deployed — the rationale for deployment and safeguards implemented. The 30-day lookback gap allows time for report preparation. Reports must be conspicuously published.
Statutory Text
(c) Not less than once every 90 days, produce and conspicuously publish a transparency report that covers the period of 120 days before the publishing of the report to 30 days before the publishing of the report that includes all of the following information: (i) The conclusion of any risk assessments made during the reporting period in accordance with the safety and security protocol under subdivision (a). (ii) If different from the preceding reporting period, for each type of critical risk, an assessment of the relevant capability of the foundation model to create that critical risk of whichever of the large developer's foundation models, whether deployed or not, would pose the highest level of that critical risk if deployed without adequate safeguards and protections. (iii) If, during the reporting period, the large developer has deployed or modified a foundation model that would pose a higher level of critical risk than any of the large developer's existing deployed foundation models if deployed without adequate safeguards and protections, both of the following: (A) The grounds on which and the process by which the large developer decided to deploy the foundation model. (B) Any safeguards and protections implemented by the large developer to mitigate critical risks.
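As a worked illustration of the Sec. 7(1)(c) arithmetic (function name is hypothetical), each report published on day D covers D−120 through D−30, leaving a 30-day gap for preparation; at the maximum 90-day cadence, successive windows tile the calendar with no coverage gaps:

```python
from datetime import date, timedelta

def transparency_report_window(publish_date: date) -> tuple[date, date]:
    """Window covered by a transparency report under Sec. 7(1)(c):
    from 120 days before publication to 30 days before publication."""
    start = publish_date - timedelta(days=120)
    end = publish_date - timedelta(days=30)
    return start, end

# At the maximum 90-day cadence, the next report's 90-day window
# begins exactly where the previous one ends: (D+90)-120 == D-30.
```

For example, a report published 2026-05-01 covers 2026-01-01 through 2026-04-01; a follow-up report 90 days later picks up at 2026-04-01.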
G-01 AI Governance Program & Documentation · G-01.3 · G-01.4 · Developer · Frontier AI System · Foundation Model
Sec. 7(1)(d), Sec. 7(3)-(4)
Plain Language
Large developers must record and retain all critical risk testing details — tests used and results obtained — for at least five years with sufficient detail for third-party replication. All documents published under the act must appear on a conspicuous page on the developer's website. Redactions are permitted for trade secrets, public safety, national security, or legal compliance, but if any redaction is made, the developer must retain the unredacted version for five years, provide the Attorney General access on request, and describe the character and justification of each redaction in the published version. The same redaction and retention rules apply to auditors publishing reports under Section 9.
Statutory Text
(d) Record and retain for 5 years any specific tests used and results obtained as a part of an assessment of critical risk with sufficient detail for qualified third parties to replicate the testing. (3) If a large developer publishes a document in accordance with the requirements of this act, the large developer shall publish the information on a conspicuous page on the large developer's website. The large developer may redact the document as reasonably necessary to protect the large developer's trade secrets, public safety, or national security, or to comply with applicable law. An auditor required to perform an audit and produce a report under section 9 may redact information from the report using the same procedure described in this subsection before the publication of that report under section 9(3). (4) If a large developer or auditor makes a redaction under subsection (3), the large developer or auditor shall do both of the following: (a) Retain an unredacted version of the document for not less than 5 years and provide the attorney general with the ability to inspect the unredacted document on request. (b) Describe the character and justification of the redactions in the published version of the document.
S-03 Frontier Model Safety Obligations · S-03.3 · Developer · Frontier AI System · Foundation Model
Sec. 7(2)
Plain Language
Large developers are prohibited from knowingly including false or materially misleading statements or omissions in any document produced under Section 7, including the safety and security protocol and transparency reports. This is a scienter-based prohibition — it requires knowledge, not mere negligence.
Statutory Text
(2) A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced in accordance with this section.
G-01 AI Governance Program & Documentation · G-01.5 · Developer · Frontier AI System · Foundation Model
Sec. 9(1)-(4)
Plain Language
At least once per year, large developers must hire a reputable third-party auditor to assess (1) compliance with the developer's own safety and security protocol, (2) any instances where the protocol was too vague to determine compliance, and (3) any potential violations of the truthfulness, publication, and redaction requirements in Section 7. The developer must grant the auditor access to all act-related materials and any other materials reasonably necessary. The audit team must include at least one corporate compliance expert and one technical AI safety expert. The completed report must be conspicuously published within 90 days of completion, subject to the same redaction rules as other published documents under the act.
Statutory Text
(1) Beginning on January 1, 2026, not less than once per year, a large developer shall retain a reputable third-party auditor to produce a report that assesses all of the following: (a) If the large developer has complied with the large developer's safety and security protocol and any instances of noncompliance. (b) Any instance where the large developer's safety and security protocol was not stated clearly enough to determine if the large developer has complied with the safety and security protocol. (c) Any instance that the auditor believes the large developer violated section 7(2), (3), or (4). (2) A large developer shall grant the auditor access to all materials produced to comply with this act and any other materials reasonably necessary to perform the assessment under subsection (1). (3) Not more than 90 days after the completion of the auditor's report under subsection (1), a large developer shall conspicuously publish that report. (4) In conducting an audit under this section, an auditor shall employ or contract 1 or more individuals with expertise in corporate compliance and 1 or more individuals with technical expertise in the safety of foundation models.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer · Frontier AI System · Foundation Model
Sec. 11(1)
Plain Language
Large developers are prohibited from retaliating against employees — including contractors, subcontractors, unpaid advisors involved with critical risk, and corporate officers — for reporting or being about to report to federal or state authorities that the developer's activities pose a critical risk. The protection extends to reports made verbally or in writing and covers discharge, threats, and discrimination regarding compensation, terms, conditions, location, or privileges of employment. The only exception is if the employee knows the report is false.
Statutory Text
(1) A large developer shall not discharge, threaten, or otherwise discriminate against an employee regarding the employee's compensation, terms, conditions, location, or privileges of employment because the employee, or an individual acting on behalf of the employee, reports or is about to report to an appropriate federal or state authority, verbally or in writing, information that indicates that the large developer's activities pose a critical risk, unless the employee knows that the report is false.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.1 · G-03.2 · Developer · Frontier AI System · Foundation Model
Sec. 11(5)-(6)
Plain Language
Large developers must (1) post notices and use other appropriate means to inform employees of their whistleblower protections and obligations, and (2) maintain a reasonable internal anonymous disclosure process for employees who believe the developer's activities present a critical risk. The process must provide monthly status updates to disclosing employees on the investigation and any responsive actions. All disclosures and updates must be retained for at least seven years and shared with non-conflicted officers and directors at least quarterly. The quarterly board-level sharing requirement ensures that critical risk disclosures are escalated to leadership.
Statutory Text
(5) A large developer shall do both of the following: (a) Post notices and use other appropriate means to keep the large developer's employees informed of the employees' protections and obligations under this section. (b) Provide a reasonable internal process through which both of the following occur: (i) An employee may anonymously disclose information to the large developer if the employee believes in good faith that the information indicates the large developer's activities present a critical risk. (ii) A monthly update is given to the employee under subparagraph (i) regarding the status of the large developer's investigation of the disclosure and any actions taken by the large developer in response to the disclosure. (6) A large developer shall maintain the disclosures and updates provided under subsection (5)(b) for not less than 7 years after the date when the disclosure or update was created. Each disclosure and update must be shared with the officers and directors of the large developer who do not have a conflict of interest not less than once per quarter.
Other · Frontier AI System · Foundation Model
Sec. 11(2)-(4)
Plain Language
Employees who experience retaliation for whistleblowing may bring a civil action within 90 days of the violation in the appropriate circuit court. Available remedies include injunctive relief, actual damages, attorney and witness fees, court costs, reinstatement, back wages, and full restoration of fringe benefits and seniority rights. The employee must prove by clear and convincing evidence that they were about to make a protected report. This is the enforcement mechanism for the anti-retaliation obligation in Section 11(1), not a separate compliance obligation.
Statutory Text
(2) An employee who alleges a violation of subsection (1) may bring a civil action not more than 90 days after the occurrence of the alleged violation seeking 1 or more of the following: (a) Injunctive relief. (b) Actual damages. (c) Reasonable attorney fees, witness fees, and court costs. (d) Any other relief the court considers appropriate, including the reinstatement of the employee, the payment of back wages, and full reinstatement of fringe benefits and seniority rights. (3) An employee who brings a civil action under subsection (2) must show by clear and convincing evidence that the employee, or an individual acting on behalf of the employee, was about to make a report protected by subsection (1). (4) A civil action commenced under subsection (2) may be brought in the circuit court for the county where the alleged violation occurred, the county where the complainant resides, or the county where the person against whom the civil complaint is filed resides or has the person's principal place of business.
Other · Frontier AI System · Foundation Model
Sec. 13(1)-(3)
Plain Language
The Attorney General may bring a civil action for violations of the safety protocol and transparency reporting requirements (Section 7) and audit requirements (Section 9), seeking fines up to $1,000,000 per violation and/or injunctive or declaratory relief. Courts consider violation severity and whether a critical risk materialized or could have. Separately, if a large developer's activities present an imminent critical risk, the AG may seek injunctive relief regardless of whether a specific section violation has occurred. This is the enforcement mechanism, not a standalone compliance obligation.
Statutory Text
(1) If a large developer violates section 7 or 9, the attorney general may bring a civil action seeking 1 or both of the following: (a) A civil fine of not more than $1,000,000.00 per violation. (b) Injunctive or declaratory relief. (2) In determining the relief granted under subsection (1), the court may consider both of the following: (a) The severity of the violation. (b) If the violation resulted in, or could have resulted in, the materialization of a critical risk. (3) If a large developer's activities present an imminent critical risk, the attorney general may bring a civil action seeking injunctive relief.