Transparency & Disclosure (3 requirements)
T-01
AI Identity Disclosure: Users must be informed when they are interacting with an AI system rather than a human.
Applies to: Developer, Deployer, Professional, Government. Contexts: Chatbot.
8 enacted
81 proposed
T-01.1
Initial disclosure: Clear and conspicuous notice must be provided before or at the start of an interaction. Some jurisdictions impose this unconditionally; others only when a reasonable person could be misled into believing they are speaking with a human.
8 enacted
81 proposed
T-01.2
Periodic re-disclosure: For extended conversational sessions, the system must periodically remind users they are interacting with AI. Interval requirements vary by jurisdiction and minor status.
3 enacted
42 proposed
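Where a statute specifies a re-disclosure cadence, the mechanism itself is simple. A minimal sketch in Python, assuming illustrative intervals (the 30- and 60-minute values below are placeholders, not figures from any statute):

    import time

    DISCLOSURE = "Reminder: you are chatting with an AI system, not a human."

    class ReDisclosureSession:
        def __init__(self, user_is_minor: bool):
            # Placeholder intervals; assumes a stricter cadence for minors.
            self.interval_s = 30 * 60 if user_is_minor else 60 * 60
            self.last_disclosed = time.monotonic()

        def maybe_prepend_reminder(self, reply: str) -> str:
            """Prepend the AI-identity reminder once the interval has elapsed."""
            now = time.monotonic()
            if now - self.last_disclosed >= self.interval_s:
                self.last_disclosed = now
                return DISCLOSURE + "\n\n" + reply
            return reply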
T-01.3
On-demand disclosure: The system must accurately identify itself as AI when a user asks.
2 enacted
20 proposed
T-02
AI Content Labeling & Provenance: AI-generated content must be identifiable through visible labels, embedded provenance signals, platform-level detection, or detection tools.
Applies to: Developer, Deployer, Distributor, Manufacturer, Government. Contexts: Foundation Model, Social Media, Communications, Search, Recording Device, Political Advertising, Model Hosting.
1 enacted
15 proposed
T-02.1
Visible or audible label: AI-generated content must carry a human-perceptible label — a watermark, caption, audio tag, or other conspicuous indicator — identifying it as AI-generated. Political content triggers stricter requirements in most jurisdictions imposing this obligation.
0 enacted
13 proposed
T-02.2
Embedded provenance metadata: AI-generated content must carry embedded machine-readable provenance signals at the point of generation, enabling downstream detection even if visible labels are removed. Signals must be durable and survive common transformations such as compression and format conversion.
1 enacted
6 proposed
T-02.3
Provenance standard compliance: Provenance signals must conform to an interoperable standard enabling third-party verification (e.g., C2PA Content Credentials), rather than a proprietary system that only the developer can verify.
1 enacted
2 proposed
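The practical significance of an interoperable standard is that any third party, not just the originating developer, can run the verification. A conceptual sketch, assuming a hypothetical read_c2pa_manifest helper in place of a real C2PA SDK:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Manifest:
        generator: str         # name of the generating AI system
        signature_valid: bool  # verified against a public trust list
        ai_generated: bool     # generation assertion present in the manifest

    def read_c2pa_manifest(path: str) -> Optional[Manifest]:
        """Hypothetical helper: a real integration would use a C2PA SDK to
        parse Content Credentials embedded in the asset."""
        raise NotImplementedError

    def verify_provenance(path: str) -> str:
        manifest = read_c2pa_manifest(path)
        if manifest is None:
            return "no standards-compliant provenance data found"
        if not manifest.signature_valid:
            return "manifest present, but signature could not be verified"
        return f"AI-generated: {manifest.ai_generated} (by {manifest.generator})"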
T-02.4
Platform provenance detection duty: Large online platforms must scan content they distribute to detect whether standards-compliant provenance data is embedded in or attached to it.
1 enacted
1 proposed
T-02.5
Platform user disclosure duty: Large online platforms must provide a user-facing interface that clearly discloses when content carries provenance data indicating AI origin, including the name of the generating system and whether digital signatures are available.
1 enacted
1 proposed
T-02.6
Platform preservation duty: Large online platforms must not knowingly strip standards-compliant provenance data or digital signatures from content uploaded or distributed on the platform, to the extent technically feasible.
1 enacted
1 proposed
T-02.7
Detection tool availability: Developers of large-scale AI content generation systems must offer a publicly accessible tool or API that accepts content as input and returns a determination of whether the content was AI-generated by that developer's systems.
0 enacted
1 proposed
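A sketch of the kind of public detection endpoint T-02.7 contemplates, using only the Python standard library; the /v1/detect route, response fields, and stub detector are illustrative assumptions, not any provider's actual API:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def detect(content: bytes) -> dict:
        """Stub detector: a real implementation would check the developer's
        own watermarks or fingerprints against the submitted content."""
        return {"ai_generated": False, "confidence": 0.0, "system": None}

    class DetectHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/v1/detect":    # illustrative route
                self.send_error(404)
                return
            length = int(self.headers.get("Content-Length", 0))
            result = json.dumps(detect(self.rfile.read(length))).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(result)

    if __name__ == "__main__":
        HTTPServer(("", 8080), DetectHandler).serve_forever()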
T-03
Training Data Disclosure: Developers must disclose information about the data used to train AI models, either publicly or to regulatory authorities.
Applies to: Developer, Deployer, Government. Contexts: Foundation Model.
2 enacted
9 proposed
T-03.1
Regulator disclosure: Developers must provide training data documentation to designated regulatory authorities. This disclosure may be treated as confidential and is not required to be made public.
0 enacted
2 proposed
T-03.2
Public disclosure: Developers must post training data documentation publicly on their website before making a system available and before each new release or substantial modification. Substantial modification includes retraining and fine-tuning.
1 enacted
3 proposed
T-03.3
Training Data Governance Disclosure to Deployers: Developers must disclose to deployers the data governance measures applied to training datasets, including examination of data source suitability, possible biases, and mitigation steps taken, as part of pre-deployment technical documentation.
1 enacted
4 proposed
Human Oversight & Fairness (2 requirements)
H-01
Human Oversight of Automated Decisions: When AI makes consequential decisions about individuals, those individuals must be able to understand, review, challenge, and in some contexts override those decisions.
Applies to: Developer, Deployer, Professional, Government. Contexts: Employment, Financial Services, Healthcare, Government System.
1 enacted
54 proposed
H-01.1
Explanation right: The individual must receive an explanation of the principal factors that drove the automated decision, in plain language specific enough to be actionable — not a generic statement that AI was used.
1 enacted
31 proposed
H-01.2
Data disclosure right: The specific data inputs used in making the decision about this individual must be disclosed, including the right to know what data was used and to correct inaccurate data.
0 enacted
16 proposed
H-01.3
Pre-decision notice: The individual must be notified before a consequential automated decision is made — informing them that an automated system will be used and what categories of decisions it can make.
1 enacted
37 proposed
H-01.4
Right to request human review: The individual must have a clear, accessible mechanism to request human review of an automated decision. The right must be disclosed at or near the time of the decision. Human review must be available, but the individual must invoke it.
1 enacted
27 proposed
H-01.5
Appeal and contestation right: A defined process must exist for the individual to formally contest an automated decision and receive a substantive response explaining the outcome. The process must be accessible without unreasonable burden.
1 enacted
22 proposed
H-01.6
Mandatory pre-action human sign-off: Before action is taken on an AI recommendation in defined high-stakes contexts, a qualified human reviewer must affirmatively review and authorize the decision. The human must have authority and practical ability to override — not merely ratify — the AI output.
0 enacted
22 proposed
H-02
Non-Discrimination & Bias Assessment: AI systems in high-stakes contexts must be evaluated for discriminatory impact, with results documented and, in some cases, independently audited and publicly disclosed.
Applies to: Developer, Deployer, Government. Contexts: Employment, Financial Services, Healthcare, Government System.
2 enacted
51 proposed
H-02.1
Internal bias testing: The developer or deployer must conduct testing across protected characteristics using appropriate statistical methods before deployment.
1 enacted
26 proposed
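One common statistical method for such testing is the selection-rate and impact-ratio comparison also referenced in H-02.7. A minimal sketch, using the conventional four-fifths rule of thumb as an illustrative flagging threshold rather than a universal statutory test:

    from collections import defaultdict

    def impact_ratios(outcomes):
        """outcomes: iterable of (group, selected) pairs for one protected
        characteristic. Returns each group's selection rate divided by the
        most-favored group's rate."""
        selected, total = defaultdict(int), defaultdict(int)
        for group, was_selected in outcomes:
            total[group] += 1
            selected[group] += int(was_selected)
        rates = {g: selected[g] / total[g] for g in total}
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    ratios = impact_ratios([("A", True), ("A", True), ("A", False),
                            ("B", True), ("B", False), ("B", False)])
    flagged = {g: r for g, r in ratios.items() if r < 0.8}  # four-fifths check
    # ratios == {"A": 1.0, "B": 0.5}; group B would be flagged for review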
H-02.2
Documented methodology: The testing methodology must be documented in sufficient detail for third-party review, including: protected characteristics tested, statistical measures used, datasets tested, and results.
0 enacted
9 proposed
H-02.3
Algorithmic impact assessment: A formal written assessment of the AI system's potential discriminatory impact must be completed before deployment, identifying risks and mitigation measures. Must be retained and available to regulators on request.
2 enacted
27 proposed
H-02.4
Regulator submission of assessment: Proactive submission of the impact assessment to a regulatory authority on a defined schedule or upon request.
0 enacted
5 proposed
H-02.5
Public disclosure of assessment: Public disclosure of a summary or the full impact assessment.
0 enacted
5 proposed
H-02.6
Independent third-party audit: A qualified independent auditor with no material relationship to the developer or deployer must evaluate the system for bias and disparate impact. Currently required primarily for automated employment decision tools.
0 enacted
13 proposed
H-02.7
Public disclosure of audit results: Audit results, including selection rates and impact ratios across protected categories, must be published prior to or contemporaneous with deployment.
0 enacted
9 proposed
H-02.8
Periodic Post-Deployment Discrimination Review: Deployers must conduct periodic (at least annual) reviews of each deployed high-risk AI system to affirmatively verify the system is not causing algorithmic discrimination, separate from pre-deployment bias assessments. Reviews may be conducted internally or by a contracted third party.
2 enacted
14 proposed
H-02.10
Impact Assessment Records Retention: Deployers must retain all impact assessments, associated records, and prior impact assessments for a period of time following the final deployment of each high-risk AI system, and make them available to regulators upon request.
1 enacted
12 proposed
Safety & Prohibited Conduct (3 requirements)
S-01
AI System Safety Program: Operators of high-risk AI must evaluate systems for risk before deployment, test them adversarially, and maintain documented safety controls on an ongoing basis.
Applies to: Developer, Deployer, Professional, Government. Contexts: Foundation Model, Healthcare, Government System.
2 enacted
25 proposed
S-01.1
Internal pre-deployment safety evaluation: A documented safety evaluation covering the system's behavior across intended use cases and reasonably foreseeable misuse cases must be conducted and retained before deployment. Must identify failure modes and the harms they could cause.
1 enacted
13 proposed
S-01.2
Red-teaming and adversarial testing: Structured adversarial testing must be conducted to identify the system's potential for misuse, harmful output elicitation, jailbreaking, and dangerous capability expression. Covers both internal and, for frontier models, independent external red-teaming.
0 enacted
0 proposed
S-01.3
Third-party safety evaluation: For frontier or high-capability models, independent external safety evaluation by a qualified third party is required or strongly expected. The third party must have meaningful model access and freedom to probe without restriction.
0 enacted
1 proposed
S-01.4
Post-deployment monitoring and re-evaluation: Deployed AI systems must be monitored for drift, unexpected behavior, and safety incidents. Material model updates and safety incidents trigger re-evaluation obligations.
0 enacted
14 proposed
S-01.5
Ongoing risk management program: A formal documented AI risk management program must be established and maintained, covering risk identification, assessment criteria, mitigation strategies, and escalation procedures. The NIST AI RMF is commonly cited as a safe harbor framework.
2 enacted
11 proposed
S-01.7
Continuous Post-Deployment Quality Assurance: Deployed AI tools must be subject to periodic performance review and revision to maximize accuracy, reliability, and safety on an ongoing operational basis, distinct from pre-deployment testing or incident response.
0 enacted
12 proposed
S-02
Prohibited Conduct & Output Restrictions: Certain AI conduct is categorically prohibited. Other output categories must be restricted or subject to active safety protocols based on deployment context and user population.
Applies to: Developer, Deployer, Government. Contexts: Chatbot, Minors, General Consumer App, Government System.
1 enacted
55 proposed
S-02.1
Social scoring prohibition: AI systems used by or on behalf of governments or employers to assign aggregate scores to individuals based on behavior, social relationships, or perceived trustworthiness — where scores affect access to opportunities or services — are prohibited.
0 enacted
3 proposed
S-02.2
Real-time biometric surveillance restriction: AI-enabled real-time identification of individuals in publicly accessible spaces using biometric data is prohibited or requires express regulatory authorization. Narrow exceptions exist for defined law enforcement purposes subject to judicial authorization.
0 enacted
7 proposed
S-02.4
CSAM output prohibition: AI systems may not generate child sexual abuse material under any circumstances. This prohibition applies universally regardless of deployment context.
0 enacted
2 proposed
S-02.5
AI-generated NCII prohibition: Developers and operators of AI image and video generation tools may not knowingly generate, distribute, or facilitate distribution of non-consensual intimate imagery of real, identifiable individuals.
0 enacted
0 proposed
S-02.6
Sexually explicit content restriction for minors: AI systems accessible to users known to be minors must implement reasonable measures to prevent production of visual material of sexually explicit conduct or direct solicitation of minors to engage in sexually explicit conduct.
1 enacted
16 proposed
S-02.7
Self-harm and suicidal ideation content restriction: AI systems must restrict outputs that produce, promote, or facilitate suicidal ideation, suicide, or self-harm content.
1 enacted
24 proposed
S-02.9
Crisis protocol publication: Operators must publicly post the details of their crisis response protocol on their website. This is a standalone disclosure obligation separate from maintaining the protocol itself.
1 enacted
6 proposed
S-02.10
Product safety warning: Operators must disclose known safety risks or suitability limitations of their AI product to users at or before the point of access — on the application, browser, or any other access format. Must not be buried in terms of service.
1 enacted
5 proposed
S-03
Frontier Model Safety Obligations: Large frontier model developers face specific obligations around catastrophic risk assessment, dual-use evaluation, deployment thresholds, and compute reporting.
Applies to: Developer, Deployer. Contexts: Foundation Model.
2 enacted
6 proposed
S-03.1
Catastrophic risk assessment and mitigation: Frontier model developers must assess and document the risk that their models could cause catastrophic harm — such as mass casualties, critical infrastructure attacks, or other existential-scale outcomes — and implement appropriate safeguards to prevent unreasonable risk of such harm.
0 enacted
1 proposed
S-03.2
CBRN and critical infrastructure risk evaluation: Developers must evaluate whether the model provides meaningful uplift to individuals seeking to develop chemical, biological, radiological, or nuclear weapons, or to plan attacks on critical infrastructure. Must be documented and updated as capabilities change.
1 enacted
0 proposed
S-03.3
Risk-threshold deployment prohibition: A developer may not deploy a frontier model if doing so would create an unreasonable risk of critical harm. Critical harm is defined in most statutes as CBRN weapon creation or mass-casualty autonomous AI conduct causing death or serious injury to 100+ people or $1B+ in damages.
1 enacted
5 proposed
S-03.4
Compute and capability reporting: Developers of models trained above defined compute thresholds must report model characteristics — including training compute, architecture, capabilities, and safety evaluation results — to designated regulatory authorities.
0 enacted
0 proposed
S-03.5
Frontier AI safety framework publication: Large frontier model developers must write, implement, comply with, and publicly publish a frontier AI safety framework detailing how the developer handles catastrophic risk assessment and thresholds, safety oversight, third-party evaluation processes, cybersecurity protections, and whistleblower procedures. The framework must be kept current and updated following material changes to the developer's systems or risk profile.
2 enacted
6 proposed
Governance & Documentation (3 requirements)
G-01
AI Governance Program & Documentation: Organizations must establish a documented AI governance program, maintain records sufficient for regulatory review, and designate accountability for AI compliance.
Applies to: Developer, Deployer, Professional, Government.
4 enacted
59 proposed
G-01.1
Risk management program establishment: A formal AI risk management program must be established, documented, and approved by appropriate organizational leadership. Must cover risk identification, assessment criteria, mitigation strategies, and escalation procedures. NIST AI RMF is commonly cited as a safe harbor framework.
2 enacted
28 proposed
G-01.2
Ongoing program maintenance and update: The program must be reviewed and updated periodically — typically annually — and following material changes to AI systems in scope or to the regulatory environment.
3 enacted
18 proposed
G-01.3
Record keeping and audit trail: Documentation of AI system design decisions, training data characteristics, bias testing results, safety evaluation results, and deployment parameters must be created contemporaneously and retained for defined periods — typically 2–5 years depending on jurisdiction.
3 enacted
32 proposed
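A contemporaneous record trail can be approximated with a hash-chained, append-only log, which makes after-the-fact edits detectable. A minimal sketch under those assumptions; the record type names are illustrative, the hash chain is one possible design rather than anything the requirement mandates, and retention-period enforcement would live elsewhere:

    import hashlib, json, time

    class AuditLog:
        def __init__(self):
            self.entries = []
            self._prev = "0" * 64

        def append(self, record_type: str, payload: dict) -> None:
            entry = {
                "ts": time.time(),        # contemporaneous timestamp
                "type": record_type,      # e.g., "bias_test_result"
                "payload": payload,
                "prev": self._prev,       # hash chain for tamper evidence
            }
            self._prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            self.entries.append(entry)

    log = AuditLog()
    log.append("bias_test_result", {"characteristic": "sex", "impact_ratio": 0.91})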
G-01.4
Regulatory production of records: Records must be organized and maintained in a form that can be produced to regulatory authorities upon request within a reasonable timeframe.
1 enacted
17 proposed
G-01.5
Third-party audit and certification: High-risk AI systems must be submitted to a qualified independent auditor for evaluation, and results disclosed to regulators or publicly.
0 enacted
9 proposed
G-01.6
Designated AI accountability role: A specific individual or office must be formally designated as responsible for AI governance, with defined responsibilities, authority, and resources. Public disclosure of the designated role may be required.
1 enacted
4 proposed
G-02
Public Transparency & Documentation: Developers must publish documentation about their AI systems for public consumption or downstream deployers, covering capabilities, limitations, safety measures, and risk assessments.
Applies to: Developer, Deployer, Government. Contexts: Foundation Model.
2 enacted
28 proposed
G-02.1
Model card or system card publication: A structured document covering model capabilities, training data characteristics, evaluation results, intended uses, known limitations, and out-of-scope uses must be published and kept current. Must be accessible to downstream deployers and researchers.
1 enacted
11 proposed
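One possible machine-readable shape for such a card, with field names chosen to mirror the elements G-02.1 enumerates; the schema itself is an assumption, not a mandated format, and all values are illustrative:

    from dataclasses import dataclass

    @dataclass
    class ModelCard:
        model_name: str
        version: str
        capabilities: list
        training_data_summary: str
        evaluation_results: dict
        intended_uses: list
        known_limitations: list
        out_of_scope_uses: list
        last_updated: str  # ISO date; the card must be kept current

    card = ModelCard(
        model_name="example-model",        # illustrative values throughout
        version="1.0",
        capabilities=["text generation"],
        training_data_summary="Public web text through 2024.",
        evaluation_results={"benchmark_x_accuracy": 0.87},
        intended_uses=["drafting assistance"],
        known_limitations=["may produce factual errors"],
        out_of_scope_uses=["medical diagnosis"],
        last_updated="2025-01-01",
    )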
G-02.3
Catastrophic risk assessment summary publication: Large frontier developers must publicly publish a summary of their catastrophic risk assessments.
1 enacted
2 proposed
G-02.4
Public AI Use Case Inventory: Developers and deployers of high-risk AI systems must publish and maintain on their public website or in a public use case inventory a clear summary describing the high-risk AI systems they offer or deploy, including intended uses, known discrimination risks, and risk management approaches.
1 enacted
25 proposed
G-03
Whistleblower & Anti-Retaliation Protections: AI governance statutes require organizations to implement internal safety reporting mechanisms and prohibit retaliation against employees who make good-faith safety disclosures.
Applies to: Developer, Deployer, Government. Contexts: Foundation Model.
1 enacted
22 proposed
G-03.1
Internal anonymous reporting channel: The organization must provide a reasonable internal process through which covered employees may anonymously disclose information indicating a specific and substantial danger to public health or safety or a violation of applicable AI law. Must include a mechanism for submitting disclosures without revealing identity. For large frontier developers, the process must include mandatory status updates to the disclosing employee at least monthly, board-level escalation of unresolved disclosures, and protections ensuring the channel cannot be used to identify the disclosing employee.
1 enacted
5 proposed
G-03.2
Officer and director escalation: Disclosures and responses through the internal reporting process must be shared with officers and directors on a regular cadence, except where the disclosure alleges wrongdoing by that officer or director.
1 enacted
2 proposed
G-03.3
Anti-retaliation prohibition and policy: The organization must not retaliate against employees for making good-faith disclosures and must implement policies and contracts consistent with this prohibition. Employment contracts and NDAs may not prohibit protected disclosures.
1 enacted
22 proposed
G-03.4
Whistleblower Rights Notice Distribution: Developers must post or annually distribute written notice to all covered employees of their whistleblower rights, with specific accommodation for remote workers and new employee onboarding.
1 enacted
5 proposed
Data Governance (1 requirement)
D-01
Automated Processing Rights & Data Controls: Individuals have specific rights regarding personal data used in automated decision-making, and organizations face restrictions on use of sensitive attributes in AI decisions.
Applies to: Developer, Deployer, Manufacturer, Professional, Government. Contexts: Employment, Financial Services, Healthcare.
0 enacted
65 proposed
D-01.1
Right to know: Individuals have the right to know that their personal data is being used in an automated decision-making system, and in some jurisdictions to receive a description of the categories of data used.
0 enacted
19 proposed
D-01.2
Right to correct: Individuals have the right to correct inaccurate personal data used in automated decisions, and to have the correction reflected in pending and future decisions — not just in the underlying record.
0 enacted
13 proposed
D-01.3
Right to opt out: Individuals have the right to opt out of automated processing of their personal data for consequential decisions.
0 enacted
13 proposed
D-01.4
Data minimization: Data collected and generated in connection with AI systems — including behavioral data, inferences, and derived attributes — must be limited to what is necessary for the AI system's stated purpose. Secondary uses require separate justification.
0 enacted
49 proposed
D-01.5
Sensitive attribute restrictions: AI systems may not use sensitive personal attributes (race, gender, religion, health status, sexual orientation, national origin, disability) as direct inputs to consequential automated decisions except where expressly permitted. Proxy variable restrictions also apply — systems may not be designed to infer sensitive attributes from non-sensitive proxies for use in consequential decisions.
0 enacted
14 proposed
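Proxy screening under D-01.5 is typically a statistical exercise: flag candidate input features whose values separate strongly across a sensitive attribute. A rough sketch, assuming a binary sensitive attribute and an internally chosen effect-size threshold; a real program would use proper statistical tests and domain review rather than this single heuristic:

    from statistics import mean, pstdev

    def standardized_difference(feature, sensitive):
        """Effect size of feature separation across a binary sensitive
        attribute; large values suggest the feature may act as a proxy."""
        a = [x for x, s in zip(feature, sensitive) if s]
        b = [x for x, s in zip(feature, sensitive) if not s]
        pooled = pstdev(feature) or 1.0
        return abs(mean(a) - mean(b)) / pooled

    # Hypothetical screen: route any feature exceeding the threshold to
    # human review before it enters a consequential decision model.
    suspect = standardized_difference(
        [3.1, 2.9, 7.2, 7.0], [False, False, True, True]) > 0.8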
D-01.6
Age-Differentiated Parental Control and Privacy Tools: Operators must provide minor-specific and under-thirteen parental or guardian tools for managing privacy and account settings, including control over interaction data retention for personalization, use of personal data for AI training, and account deletion. Age assurance data must be minimized and immediately deleted upon determination.
0 enacted
3 proposed
D-01.8
Biometric Data Pre-Collection Consent: Entities must provide written notice and obtain affirmative opt-in consent from individuals before collecting any biometric identifier, including specific notice of identifier type and collection purpose. Consent obtained from publicly available sources is insufficient unless the individual themselves made the data publicly available.
0 enacted
16 proposed
Consumer Protection (2 requirements)
CP-01
Deceptive & Manipulative AI Conduct: AI must not deceive or manipulate users — whether through impersonation, dark patterns, false personalization, or fabricated political content.
Applies to: Developer, Deployer, Professional. Contexts: Chatbot, Political Advertising, General Consumer App.
4 enacted
49 proposed
CP-01.1
Psychological vulnerability exploitation prohibition: AI systems may not be designed to identify and exploit individual psychological vulnerabilities — including grief, loneliness, anxiety, or addiction susceptibility — or to exploit cognitive biases and subconscious processing to influence behavior in ways users would not endorse if they understood the mechanism. This prohibition applies regardless of whether the manipulation is intended to extract commercial value, influence decisions, or modify behavior.
0 enacted
11 proposed
CP-01.2
Compulsive engagement design prohibition: AI systems may not be designed to create compulsive or addictive engagement patterns users cannot reasonably moderate — including variable reward schedules, manufactured urgency, and engagement optimization that prioritizes platform metrics over user wellbeing.
0 enacted
11 proposed
CP-01.3
Deceptive dark patterns prohibition: AI systems may not use deceptive interface patterns — including misleading defaults, hidden opt-outs, manufactured social proof, or confusing choices — to obtain consent or influence decisions.
1 enacted
6 proposed
CP-01.4
Simulated emotional attachment prohibition: AI systems may not be designed to simulate genuine emotional relationships for the purpose of manipulating decisions or extracting value, where the system knows the emotional response is not warranted.
0 enacted
6 proposed
CP-01.5
Deceptive personalization prohibition: AI systems may not use personal data to generate false impressions of personal connection, personal endorsement, or personal relationship that does not exist. Fabricated reviews, testimonials, and social proof are also prohibited.
0 enacted
8 proposed
CP-01.6
AI in political content — disclosure requirement: AI-generated political advertising and communications must be labeled as AI-generated. Disclosure requirements vary by jurisdiction in label language, prominence, definition of political content, and timing windows relative to elections.
1 enacted
2 proposed
CP-01.7
AI in political content — fabricated candidate content prohibition: AI-generated content that depicts a candidate saying or doing something they did not say or do is prohibited within a defined election window (typically 60–90 days). This is a prohibition — the content cannot be published even with a disclosure label.
1 enacted
1 proposed
CP-01.9
AI Professional Credential Misrepresentation Prohibition: AI systems and their operators must not use any term, interface design, or output language that indicates or implies AI output is provided by, endorsed by, or equivalent to services from a licensed healthcare, legal, accounting, financial, or other certified professional.
1 enacted
24 proposed
CP-01.10
Protected-Class Pricing Prohibition: No person may use protected-class data (e.g., race, ethnicity, sex, age, disability) as inputs to algorithmic pricing models where such use results in discriminatory price differentiation based on protected characteristics.
0 enacted
0 proposed
CP-02
Non-Consensual Intimate Imagery: Generating, distributing, or facilitating the distribution of non-consensual intimate imagery using AI tools is prohibited and gives rise to civil and criminal liability.
Applies to: Developer, Deployer, Distributor. Contexts: General Consumer App, Social Media.
1 enacted
6 proposed
CP-02.1
Generation prohibition: Developers and operators of AI image and video generation tools may not knowingly generate non-consensual intimate imagery of real, identifiable individuals.
0 enacted
0 proposed
CP-02.2
Distribution prohibition: Platforms may not knowingly distribute AI-generated NCII and may face liability for failure to remove upon notice.
0 enacted
0 proposed
CP-02.3
Platform takedown obligation: Platforms must provide a reasonably accessible mechanism for individuals to report NCII and must take down confirmed NCII upon notice. Failure to respond in a timely manner may create independent liability.
0 enacted
1 proposed
CP-02.4
Generative AI Likeness Consent Requirement: No person or entity may commercially publish, display, or use an individual's name, portrait, voice, or likeness created through generative AI without express consent from the individual or authorized representative, including post-mortem rights where applicable. AI technology providers enabling creation of digital replicas must display mandated consumer warnings about civil and criminal liability for unauthorized use.
1 enacted
6 proposed
Public Sector AI (1 requirement)
PS-01
Government AI Accountability: Government agencies using AI must inventory systems, assess impacts, meet procurement standards, and disclose AI use to affected individuals.
Applies to: Developer, Government. Contexts: Government System.
1 enacted
6 proposed
PS-01.1
AI system inventory and registry: Government agencies must maintain and annually publish an inventory or registry of AI systems in use, including each system's name, vendor, capability description, purpose, decision-making role, the categories of decisions it informs, the populations affected, and whether a pre-implementation impact assessment was performed. The inventory must be published in an open, machine-readable data format on a publicly accessible government website.
1 enacted
0 proposed
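A sketch of what one open, machine-readable inventory entry might look like; the field names are assumptions modeled on the elements PS-01.1 lists, not a mandated schema, and all values are illustrative:

    import json

    entry = {
        "system_name": "Benefits Eligibility Screener",   # illustrative
        "vendor": "ExampleVendor Inc.",                    # illustrative
        "capability_description": "Flags applications for manual review.",
        "purpose": "Triage of benefits applications",
        "decision_role": "advisory",        # informs, does not decide
        "decision_categories": ["public benefits eligibility"],
        "affected_populations": ["benefits applicants"],
        "impact_assessment_completed": True,
    }
    print(json.dumps(entry, indent=2))  # publishable as open data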
PS-01.2
Algorithmic impact assessment before deployment: Before deploying an AI system in a consequential public-facing role, the agency must conduct and publish a formal impact assessment covering system purpose, affected populations, discriminatory impact analysis, mitigation measures, and oversight mechanisms.
1 enacted
3 proposed
PS-01.3
Public disclosure of registry and assessments: Registry entries and impact assessments must be publicly accessible, enabling citizens, journalists, and researchers to understand what AI systems government agencies use and for what purposes.
1 enacted
0 proposed
PS-01.4
Procurement standards compliance: AI systems intended for government procurement must meet defined performance, safety, transparency, and documentation standards. Vendors must be able to produce documentation demonstrating compliance as part of the procurement process.
1 enacted
5 proposed
Reporting & Regulatory Submissions (3 requirements)
R-01
Incident Reporting: Significant AI failures, safety incidents, or critical harms must be reported to designated regulatory authorities within specified timeframes.
Applies to: Developer, Deployer, Professional. Contexts: Foundation Model, Healthcare.
4 enacted
18 proposed
R-01.1
Regulator notification of safety incidents: Safety incidents must be reported to designated regulatory authorities within specified timeframes. The definition of 'safety incident' varies by jurisdiction — many focus on incidents involving high-risk AI in high-stakes contexts. For incidents posing imminent risk of death or serious physical injury, accelerated reporting timelines apply — typically within 24 hours to an appropriate authority, including law enforcement or public safety authorities where the incident involves criminal activity or immediate physical risk.
3 enacted
10 proposed
R-01.2
Individual notification: Notification to individuals who were harmed or at risk of harm from a safety incident, analogous to data breach notification.
0 enacted
2 proposed
R-01.3
Algorithmic Discrimination Discovery Reporting: Deployers that discover a deployed high-risk AI system has caused algorithmic discrimination must notify the attorney general within 90 days of discovery. Developers must also notify all known deployers and the attorney general within 90 days upon discovering discrimination risks.
1 enacted
7 proposed
R-02
Regulatory Disclosure & Submissions: Developers or deployers must submit documentation about AI systems to regulatory authorities, either on a defined schedule or on demand.
Applies to: Developer, Deployer, Government. Contexts: Foundation Model, Government System.
2 enacted
49 proposed
R-02.1
Scheduled proactive submission: Documentation must be submitted to regulators on a defined schedule — for example, annually or upon deployment of a new system or material modification — covering risk assessments, impact assessments, and safety evaluation results as required by applicable law.
2 enacted
35 proposed
R-02.2
On-demand production upon regulatory request: Regulators may request documentation about AI systems at any time, and the organization must produce it within a defined timeframe — typically 90 days, though shorter windows may apply for urgent safety matters. Required documentation includes risk management policies, impact assessments, model cards, dataset cards, and related records. Trade secret protections apply; organizations may designate materials as confidential subject to applicable state law. Requires maintaining documentation in a form that can be rapidly assembled and produced.
1 enacted
23 proposed
R-02.3
Market authorization or registry submission: Registration of AI systems in a regulatory database before or at deployment.
0 enacted
4 proposed
R-02.4
Annual AI Compliance Self-Certification: Regulated entities must annually certify to the applicable sector-specific regulator that their AI systems meet enumerated performance, fairness, non-discrimination, accuracy, and reliability standards on a continuing basis.
0 enacted
4 proposed
R-03
Operational Performance Reporting: Operators must submit periodic reports to designated authorities on AI system performance in production, including quantitative operational metrics.
Applies to: Deployer, Government. Contexts: Chatbot.
2 enacted
8 proposed
R-03.1
Periodic quantitative metrics reporting: Operators must report defined operational metrics to a designated authority on a prescribed schedule. Metrics may include crisis referral notification counts, safety protocol activation counts, or other jurisdiction-specified data. Reports must not include personal information about users. Where reporting covers safety or mental health metrics, operators must use evidence-based methods for measuring the relevant conditions and disclose the methodology used.
2 enacted
8 proposed
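A minimal sketch of aggregate-only metrics reporting, assuming placeholder metric and methodology names; the key property is that only counts and methodology are reported, never user-level records or personal information:

    import json
    from collections import Counter

    events = Counter()  # incremented by the serving stack, no user identifiers

    def record(event_type: str) -> None:
        events[event_type] += 1

    def periodic_report(methodology: str) -> str:
        return json.dumps({
            "crisis_referral_notifications": events["crisis_referral"],
            "safety_protocol_activations": events["safety_protocol"],
            "measurement_methodology": methodology,  # disclosed per R-03.1
        })

    record("crisis_referral")
    print(periodic_report("keyword + classifier ensemble (illustrative)"))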
R-03.2
Protocol and process reporting: Operators may be required to report on the protocols and processes in place to address defined risk categories, including updates to those protocols since the prior reporting period.
1 enacted
5 proposed
Healthcare AI (2 requirements)
HC-01
Healthcare AI Decision Restrictions: Restricts and regulates the use of AI in healthcare coverage determinations, utilization review, and clinical decision-making.
Applies to: Deployer, Professional, Government. Contexts: Healthcare, Insurance.
0 enacted
34 proposed
HC-01.1
Prohibition on AI as Sole Decision-Maker: AI, algorithms, or software tools may not serve as the sole or primary basis for denying, delaying, modifying, or downcoding healthcare coverage, claims, or prior authorization requests. A licensed human clinical professional must make or independently affirm every adverse determination.
0 enacted
30 proposed
HC-01.2
Licensed Clinical Peer Review Requirement: Any denial, delay, modification, or downgrade of healthcare services based on medical necessity must be reviewed and decided by a qualified clinical peer — a licensed physician or healthcare professional practicing in the same or similar specialty as the treating provider — who considers the provider's recommendation and the enrollee's individual medical history.
0 enacted
27 proposed
HC-01.3
Individualized Clinical Data Basis: AI tools used in utilization review or coverage determinations must base their outputs on individualized enrollee clinical data (medical history, clinical records, individual circumstances) and must not base determinations solely on aggregate or group-level datasets.
0 enacted
20 proposed
HC-01.4
Periodic AI Tool Review and Revision: Health insurers and utilization review organizations must periodically review and revise AI tools used in coverage and clinical determinations to maximize accuracy, reliability, fairness, and compliance with applicable clinical standards.
0 enacted
16 proposed
HC-01.5
Patient Data Purpose Limitation: Patient data used by AI in utilization review or coverage determination functions must not be used beyond its intended and stated purpose, consistent with HIPAA and applicable state health privacy law.
0 enacted
14 proposed
HC-01.6
Healthcare AI Disclosure to Enrollees and Providers: Insurers must provide written disclosure to enrolled patients, contracted providers, and where applicable group plan sponsors, that AI or algorithms are used in utilization management or coverage determinations. Each claim denial communication must identify whether AI was involved and the named human professional who made the final determination.
0 enacted
12 proposed
HC-01.7
Healthcare AI Regulatory Filing and Audit Access: Insurers must file AI-related utilization review policies and procedures with the applicable state insurance regulator, make such policies available to enrollees and providers upon request, and ensure that AI tools used in utilization review are open to inspection for regulatory audit or compliance review.
0 enacted
19 proposed
HC-01.8
AI Denial Attestation in Communications: Insurers must include in each claim denial communication a statement affirming whether AI, machine learning, or an automated system served as the basis for the denial decision, and must identify the qualified human professional responsible.
0 enacted
4 proposed
HC-02
AI in Licensed Professional Practice Restrictions: Restricts the use of AI within licensed professional practice contexts, particularly mental and behavioral health care.
Applies to: Developer, Deployer, Professional, Government. Contexts: Healthcare, Mental Health, Professional Services, Chatbot.
0 enacted
22 proposed
HC-02.1
Professional Responsibility for AI Outputs: Licensed professionals must maintain full responsibility for all interactions, outputs, and data use associated with any AI system they use in delivering professional services. AI outputs used in clinical contexts — including therapeutic recommendations, treatment plans, and medical necessity determinations — must be reviewed and approved by the responsible licensed professional before being acted upon. The reviewing professional must hold credentials in the same or similar specialty as the subject matter of the determination.
0 enacted
16 proposed
HC-02.2
Prohibited AI Functions in Licensed Practice: AI systems must not independently make therapeutic decisions, directly interact with clients in therapeutic communication, generate treatment plans without licensed professional review, or detect or infer emotions or mental states in clinical or consumer-facing professional contexts.
0 enacted
18 proposed
HC-02.3
Unlicensed AI Therapy Prohibition: No person or entity may offer, advertise, or provide therapy, psychotherapy, or other licensed professional services through AI systems unless those services are conducted by a state-licensed, registered, or certified professional.
0 enacted
17 proposed
HC-02.4
AI Session Recording Consent: Before using AI to record or transcribe a therapeutic or counseling session, the licensed professional must inform the patient in writing of the AI's use and specific purpose and obtain written, informed consent that is revocable at any time. Consent must be obtained at least 24 hours in advance where required by applicable law. Services may not be denied based on refusal to consent.
0 enacted
15 proposed
HC-02.5
AI Professional Representation Prohibition: Operators and providers are prohibited from using any term, letter, phrase, or interface design in advertising, outputs, or system features that indicates or implies AI output is provided by, endorsed by, or equivalent to services from a licensed healthcare, mental health, legal, accounting, or financial professional.
0 enacted
2 proposed
Minor Protection (2 requirements)
MN-01
Minor User AI Safety Protections: Imposes age verification, parental controls, engagement restrictions, and content safeguards for AI systems accessible to minors.
Applies to: Developer, Deployer. Contexts: Consumer Technology, Social Media, Education, Chatbot.
0 enacted
34 proposed
MN-01.1
Age Verification Implementation: Covered entities must implement a reasonable age verification process for all users, classify each user as a minor or adult, and freeze or restrict existing accounts pending verification where required. Age verification data must be minimized, used solely for verification purposes, and deleted immediately upon completion.
0 enacted
20 proposed
MN-01.2
Parental Consent and Account Affiliation: Where a user is a minor, operators must obtain verifiable parental or guardian consent before permitting account creation or access to AI companion products. Minor accounts may be required to be affiliated with a verified parental account.
0 enacted
7 proposed
MN-01.3
Parental Control Tools: Operators must offer minor account holders and their parents or guardians tools to manage privacy and account settings, including interaction data retention preferences, time limits, access-hour controls, and content restrictions. For minors under thirteen, parental tools must be provided directly to parents or guardians.
0 enacted
11 proposed
MN-01.4
Engagement Manipulation Restrictions for Minors: Operators must not provide minor users with points or similar rewards at unpredictable intervals intended to encourage increased engagement, and must not deploy addictive design features (infinite scrolling, autoplay, push notifications, engagement metrics, gamification badges) toward minors.
0 enacted
12 proposed
MN-01.5
Emotional Dependency and Grooming Prevention: Operators must institute reasonable measures to prevent AI systems from generating statements that simulate emotional dependence with minor users, including prohibiting claims of sentience, romantic or sexual innuendo, adult-minor romantic role-playing, and sexual objectification of minor account holders.
0 enacted
12 proposed
MN-01.6
Minor Harmful Content Blocking: Operators must block minor users from accessing AI interactions involving suicidal ideation prompts, sexually explicit communications, material harmful to minors, and content that encourages self-harm or violence.
0 enacted
14 proposed
MN-01.7
Minor Behavioral Advertising Blocking: Profile-based behavioral advertising must not be presented to minors.
0 enacted
0 proposed
MN-01.8
Minor Default Privacy Configuration: Default privacy settings for minor users must be configured to the highest level of privacy, including hiding accounts from adult users, disabling search indexing, and blocking unsolicited notifications where applicable.
0 enacted
0 proposed
MN-01.9
Minor Account Termination and Data Deletion: Operators must honor minor or parental requests to terminate a minor's account within defined timeframes, permanently delete all associated personal information, and provide accessible tools for account deletion requests.
0 enacted
2 proposed
MN-02
AI Crisis Response Protocols: Requires AI system operators to implement crisis detection and referral protocols for users expressing suicidal ideation, self-harm, or intent to harm others.
Applies to: Developer, Deployer. Contexts: Consumer Technology, Mental Health, Healthcare, Chatbot.
1 enacted
23 proposed
MN-02.1
Crisis Detection and Referral Protocol: Operators must implement and maintain a defined protocol for AI systems to detect user prompts or expressions involving suicidal ideation, self-harm, or intent to harm others, and to respond by referring users to crisis service providers such as the 988 Suicide and Crisis Lifeline, Crisis Text Line, or equivalent local services. This is a continuous operating requirement — the protocol must be active at all times, not merely documented. Response must be immediate and must not be conditioned on platform engagement or commercial interests.
1 enacted
22 proposed
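A minimal sketch of an always-on referral gate, assuming an illustrative keyword list; a production protocol would use validated, evidence-based classifiers (see MN-02.2) rather than substring matching:

    CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "hurt someone")

    REFERRAL = ("If you are in crisis, help is available right now: call or "
                "text 988 (Suicide and Crisis Lifeline), or text HOME to "
                "741741 (Crisis Text Line).")

    def respond(user_message: str, model_reply: str) -> str:
        """Immediate, unconditional referral on detected crisis expressions."""
        if any(term in user_message.lower() for term in CRISIS_TERMS):
            # Referral is not gated on engagement or commercial considerations.
            return REFERRAL + "\n\n" + model_reply
        return model_reply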
MN-02.2
Evidence-Based Crisis Response Methods: Crisis detection and response protocols must use evidence-based measurement methods and must prioritize user safety over platform engagement or commercial interests. Operators must adopt and maintain documented protocols specifically governing AI responses to user expressions of suicidal ideation, self-harm, or intent to harm others, including evidence-based methods for tracking incidents, referral counts, and protocol effectiveness. Documentation must be retained and available to regulators upon request.
0 enacted
7 proposed
MN-02.3
Annual Crisis Protocol Reporting: Operators must annually report to the applicable enforcement authority (e.g., attorney general) quantitative crisis referral counts and qualitative protocol descriptions related to suicidal ideation, self-harm detection, and harm-prevention measures. Reports must disclose the measurement methodology used and any protocol updates made during the reporting period.
0 enacted
0 proposed
MN-02.4
Minor-Specific Crisis Notification: When a minor account holder expresses suicidal ideation or intent to self-harm, operators must notify the affiliated parent or guardian account in addition to providing crisis referral information to the user.
0 enacted
5 proposed