H-0340
VT · State · USA
● Pending
Proposed Effective Date
2025-07-01
Vermont H.340 — An act relating to regulating developers and deployers of certain automated decision systems
Summary

VT H.340 regulates developers and deployers of automated decision systems used in consequential decisions affecting Vermont consumers across sectors including employment, housing, credit, healthcare, education, and government services. Core obligations include pre-decision consumer notice and post-decision explanation rights, a mandatory appeal process with qualified human review, mandatory independent audits before deployment and periodically thereafter, detailed reporting to the Attorney General, whistleblower protections for employees, and implementation of a risk management program aligned with the NIST AI RMF. Enforcement is through the Vermont Consumer Protection Act — violations are per se unfair and deceptive acts, enforceable by the AG and through a private right of action for harmed consumers. Notably, developers may not sell or share an automated decision system for consequential use unless it has passed an independent audit for algorithmic discrimination.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement. Violations constitute unfair and deceptive acts in commerce under 9 V.S.A. § 2453 (Vermont Consumer Protection Act). The Attorney General has authority to adopt rules, conduct civil investigations, enter into assurances of discontinuance, bring civil actions, and take other enforcement actions under chapter 63, subchapter 1. A consumer harmed by a violation is eligible for all remedies provided under the Vermont Consumer Protection Act, which includes a private right of action for consumers.
Penalties
Violations are per se unfair and deceptive acts under the Vermont Consumer Protection Act (9 V.S.A. § 2453). A consumer harmed by a violation is eligible for all remedies provided under the Vermont Consumer Protection Act, which may include actual damages, statutory civil penalties, injunctive relief, and attorney's fees and costs as provided under the CPA. The Vermont CPA authorizes the AG to seek civil penalties up to $10,000 per violation.
Who Is Covered
"Deployer" means a person doing business in this State that uses an automated decision system in a consequential decision in the State or provides an automated decision system for use in a consequential decision by the general public in the State. A developer shall also be considered a deployer if its actions satisfy this definition.
"Developer" means a person doing business in this State that designs, codes, or produces an automated decision system for use in a consequential decision or creates a substantial change with respect to an automated decision system for use in a consequential decision, whether for its own use in the State or for use by a third party in the State.
"Deployer-employer" means a deployer that is an employer.
"Developer-employer" means a developer that is an employer.
What Is Covered
"Automated decision system" means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues an output, including a score, classification, or recommendation. "Automated decision system" does not include any software used primarily for basic computerized processes, such as antimalware, antivirus, autocorrect functions, calculators, databases, data storage, electronic communications, firewall, internet domain registration, website loading, networking, spam and robocall filtering, spellcheck tools, spreadsheets, web caching, web hosting, or any tool that relates only to nonemployment internal management affairs such as ordering office supplies or processing payments, and that do not materially affect the rights, liberties, benefits, safety, or welfare of any individual within the State.
Compliance Obligations · 21 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193b
Plain Language
Developers and deployers are categorically prohibited from using, selling, or sharing an automated decision system for consequential decisions if the system produces algorithmic discrimination — meaning differential treatment or impact disfavoring individuals based on a broad list of protected characteristics. This is a strict liability prohibition: the system must not produce discriminatory outcomes regardless of intent. Testing to identify and mitigate discrimination, expanding applicant pools for diversity, and private club exemptions are carved out from the definition of algorithmic discrimination.
Statutory Text
It shall be unlawful discrimination for a developer or deployer to use, sell, or share an automated decision system for use in a consequential decision or a product featuring an automated decision system for use in a consequential decision that produces algorithmic discrimination.
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
9 V.S.A. § 4193c(a)-(b)
Plain Language
Before using an automated decision system for a consequential decision, deployers must provide consumers with a clear, conspicuous, multilingual pre-decision notice. The notice must describe which personal characteristics the system measures, how it measures them, their relevance to the decision, what human oversight exists, how the automated components contribute, and a link to a public webpage describing outputs, data sources, and the latest impact assessment results. This is an affirmative pre-use disclosure obligation — it must be delivered before the system is applied to the consumer.
Statutory Text
(a) Any deployer that employs an automated decision system for a consequential decision shall inform the consumer prior to the use of the system for a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that automated decision systems will be used to make a consequential decision or to assist in making a consequential decision. (b) Any notice provided by a deployer to the consumer pursuant to subsection (a) of this section shall include: (1) a description of the personal characteristics or attributes that the system will measure or assess; (2) the method by which the system measures or assesses those attributes or characteristics; (3) how those attributes or characteristics are relevant to the consequential decisions for which the system should be used; (4) any human components of the system; (5) how any automated components of the system are used to inform the consequential decision; and (6) a direct link to a publicly accessible page on the deployer's website that contains a plain-language description of the: (A) system's outputs; (B) types and sources of data collected from natural persons and processed by the system when it is used to make, or assists in making, a consequential decision; and (C) results of the most recent impact assessment, or an active link to a web page where a consumer can review those results.
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.2 · Deployer · Automated Decisionmaking
9 V.S.A. § 4193c(c)
Plain Language
After a consequential decision is made using an automated decision system, deployers must provide the consumer with a single post-decision notice explaining the principal reasons for the decision. The notice must identify the developer, describe the system's output, explain how the system contributed to the decision, identify data types and sources used, explain in plain language how the consumer's personal data informed the outcome, and describe what actions the consumer could have taken or can take to secure a different result. This is an individualized explanation obligation — not a generic disclosure — and must be specific to the consumer's actual decision.
Statutory Text
(c) Any deployer that employs an automated decision system for a consequential decision shall provide the consumer with a single notice containing a plain-language explanation of the decision that identifies the principal reason or reasons for the consequential decision, including: (1) the identity of the developer of the automated decision system used in the consequential decision, if the deployer is not also the developer; (2) a description of what the output of the automated decision system is, such as a score, recommendation, or other similar description; (3) the degree and manner to which the automated decision system contributed to the decision; (4) the types and sources of data processed by the automated decision system in making the consequential decision; (5) a plain language explanation of how the consumer's personal data informed the consequential decision; and (6) what actions, if any, the consumer might have taken to secure a different decision and the actions that the consumer might take to secure a different decision in the future.
H-01 Human Oversight of Automated Decisions · H-01.4 · H-01.5 · Deployer · Automated Decisionmaking
9 V.S.A. § 4193c(d)
Plain Language
Deployers must establish and explain an appeal process allowing consumers to formally contest a consequential automated decision, submit supporting information, and obtain meaningful human review. The human reviewer must be trained, impartial, free from conflicts of interest, not involved in the original decision, protected from retaliation for their review decisions, and given sufficient resources. The reviewer must consider consumer-submitted information and may consider other relevant sources. Deployers must respond within 45 days, extendable once by 45 days for complexity, with notice of any extension. This is a robust procedural due process requirement that goes beyond simple human review on request.
Statutory Text
(d)(1) A deployer shall provide and explain a process for a consumer to appeal a decision, which shall at minimum allow the consumer to: (A) formally contest the decision; (B) provide information to support their position; and (C) obtain meaningful human review of the decision. (2) For an appeal made pursuant to subdivision (1) of this subsection: (A) a deployer shall designate a human reviewer who: (i) is trained and qualified to understand the consequential decision being appealed, the consequences of the decision for the consumer, how to evaluate and how to serve impartially, including by avoiding prejudgment of the facts at issue, conflict of interest, and bias; (ii) does not have a conflict of interest for or against the deployer or the consumer; (iii) was not involved in the initial decision being appealed; (iv) shall enjoy protection from dismissal or its equivalent, disciplinary measures, or other adverse treatment for exercising their functions under this section; and (v) shall be allocated sufficient human resources by the deployer to conduct an effective appeal of the decision; and (B) the human reviewer shall consider the information provided by the consumer in their appeal and may consider other sources of information relevant to the consequential decision. (3) A deployer shall respond to a consumer's appeal not later than 45 days after receipt of the appeal. That period may be extended once by an additional 45 days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the consumer of any extension not later than 45 days after receipt of the appeal, together with the reasons for the delay.
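The appeal timeline in § 4193c(d)(3) can be sketched as simple date arithmetic. This is an illustrative compliance-calendar helper, not part of the bill; the function name and dict keys are assumptions for the sketch.

```python
from datetime import date, timedelta

APPEAL_WINDOW_DAYS = 45  # § 4193c(d)(3): respond within 45 days of receipt
EXTENSION_DAYS = 45      # one extension permitted where reasonably necessary

def appeal_deadlines(received: date, extended: bool = False) -> dict:
    """Illustrative deadlines for a consumer appeal under § 4193c(d)(3).

    The deployer must respond within 45 days of receipt; that period may
    be extended once by an additional 45 days, and notice of the extension
    (with reasons) must itself be given within 45 days of receipt.
    """
    base = received + timedelta(days=APPEAL_WINDOW_DAYS)
    return {
        # the extension notice shares the original 45-day clock
        "extension_notice_due": base,
        "response_due": base + timedelta(days=EXTENSION_DAYS) if extended else base,
    }
```

For example, an appeal received January 5, 2026 would require a response by February 19, 2026, or by April 5, 2026 if the single extension is invoked (with the extension notice still due February 19).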
Other · Automated Decisionmaking
9 V.S.A. § 4193c(e)
Plain Language
This provision declares that both developers and deployers are legally responsible for the quality, accuracy, bias, and discrimination outcomes of their automated decision systems used in consequential decisions. It establishes a liability standard but does not prescribe specific compliance actions beyond those required elsewhere in the subchapter. It creates no new independent obligation.
Statutory Text
(e) The deployer or developer of an automated decision system is legally responsible for the quality and accuracy of all consequential decisions made, including any bias or algorithmic discrimination resulting from the operation of the automated decision system.
H-02 Non-Discrimination & Bias Assessment · H-02.6 · Developer · Automated Decisionmaking
9 V.S.A. § 4193c(f)
Plain Language
Developers may not use, sell, or share an automated decision system for consequential decisions unless the system has passed an independent audit under § 4193e. If the audit reveals algorithmic discrimination, the developer must halt all use, sale, or sharing until a post-adjustment audit confirms the discrimination has been rectified. This is a deployment-gating obligation — no system may enter the market without clearing the independent audit, and a discriminatory finding triggers a mandatory stop-ship until remediation is verified.
Statutory Text
(f) A developer shall not use, sell, or share an automated decision system for use in a consequential decision or a product featuring an automated decision system for use in a consequential decision that has not passed an independent audit, in accordance with section 4193e of this title. If an independent audit finds that an automated decision system for use in a consequential decision does produce algorithmic discrimination, the developer shall not use, sell, or share the system until the algorithmic discrimination has been proven to be rectified by a post-adjustment audit.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193d(a)
Plain Language
Developer-employers and deployer-employers may not prevent employees — including former employees and independent contractors — from disclosing information to the Attorney General if the employee reasonably believes the information indicates a violation of this subchapter. They also may not retaliate against employees for making such disclosures. Employment terms, NDAs, and conditions of employment cannot be used to block protected disclosures. The protected disclosure channel is specifically to the Attorney General, not an internal process.
Statutory Text
(a) Developer-employers and deployer-employers of automated decision systems used in consequential decisions shall not: (1) prevent an employee from disclosing information to the Attorney General, including through terms and conditions of employment or seeking to enforce terms and conditions of employment, if the employee has reasonable cause to believe the information indicates a violation of this subchapter; or (2) retaliate against an employee for disclosing information to the Attorney General pursuant to subdivision (1) of this subsection.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.4 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193d(b)
Plain Language
Developer-employers and deployer-employers must provide clear notice to all employees working on automated decision systems of their rights and responsibilities under this subchapter, including the right of contractor and subcontractor employees to use the developer's internal disclosure process. A safe harbor presumption of compliance applies if the employer either (1) continuously posts workplace notices, onboards new employees with equivalent notice, and periodically notifies remote workers, or (2) provides annual written notice received and acknowledged by all employees.
Statutory Text
(b) Developer-employers and deployer-employers of automated decision systems used in consequential decisions shall provide a clear notice to all employees working on automated decision systems of their rights and responsibilities under this subchapter, including the right of employees of contractors and subcontractors to use the developer's internal process for making protected disclosures pursuant to subsection (c) of this section. A developer-employer or deployer-employer is presumed to be in compliance with the requirements of this subsection if the developer-employer or deployer-employer does either of the following: (1) at all times: (A) posts and displays within all workplaces maintained by the developer-employer or deployer-employer a notice to all employees of their rights and responsibilities under this subchapter; (B) ensures that all new employees receive equivalent notice; and (C) ensures that employees who work remotely periodically receive an equivalent notice; or (2) not less frequently than once every year, provides written notice to all employees of their rights and responsibilities under this subchapter and ensures that the notice is received and acknowledged by all of those employees.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.1 · Developer · Automated Decisionmaking
9 V.S.A. § 4193d(c)
Plain Language
Each developer-employer must maintain a reasonable internal anonymous disclosure channel for employees who believe in good faith that the developer has violated this subchapter or any other law, made false or misleading safety/security statements, or failed to disclose known risks. The process must include at minimum monthly status updates to the disclosing employee on the investigation and any responsive actions. Note this obligation falls on developer-employers only — deployer-employers are not separately required to maintain an internal anonymous process.
Statutory Text
(c) Each developer-employer shall provide a reasonable internal process through which an employee may anonymously disclose information to the developer if the employee believes in good faith that the information indicates that the developer has violated any provision of this subchapter or any other law, or has made false or materially misleading statements related to its safety and security protocol, or failed to disclose known risks to employees, including, at a minimum, a monthly update to the person who made the disclosure regarding the status of the developer's investigation of the disclosure and the actions taken by the developer in response to the disclosure.
H-02 Non-Discrimination & Bias Assessment · H-02.6 · H-02.7 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193e(a)-(c)
Plain Language
Developers and deployers are jointly responsible for ensuring an independent audit is conducted at three stages: before deployment, six months after deployment, and at least every 18 months thereafter. The audit must cover data management and security compliance, system validity and reliability per use case, comparative demographic performance analysis for algorithmic discrimination, compliance with federal/state/local labor, civil rights, consumer protection, and privacy laws, and an evaluation of the risk management program. All completed audits must be delivered to the Attorney General regardless of findings. Developer and deployer must contractually allocate audit responsibilities; absent a contract, they are jointly and severally liable. Multiple auditors may be used.
Statutory Text
(a) Prior to deployment of an automated decision system for use in a consequential decision, six months after deployment, and at least every 18 months thereafter for each calendar year an automated decision system is in use in consequential decisions after the first post-deployment audit, the developer and deployer shall be jointly responsible for ensuring that an independent audit is conducted in compliance with the provisions of this section to ensure that the product does not produce algorithmic discrimination and complies with the provisions of this subchapter. The developer and deployer shall enter into a contract specifying which party is responsible for the costs, oversight, and results of the audit. Absent an agreement of responsibility through contract, the developer and deployer shall be jointly and severally liable for any violations of this section. Regardless of final findings, the deployer or developer shall deliver all audits conducted under this section to the Attorney General. (b) A deployer or developer may contract with more than one auditor to fulfill the requirements of this section. 
(c) The audit shall include the following: (1) an analysis of data management policies, including whether personal or sensitive data relating to a consumer is subject to data security protection standards that comply with the requirements of applicable State law; (2) an analysis of the system validity and reliability according to each specified use case listed in the entity's reporting document filed by the developer or deployer pursuant to section 4193f of this title; (3) a comparative analysis of the system's performance when used on consumers of different demographic groups and a determination of whether the system produces algorithmic discrimination in violation of this subchapter by each intended and foreseeable identified use as identified by the deployer and developer pursuant to section 4193f of this title; (4) an analysis of how the technology complies with existing relevant federal, State, and local labor, civil rights, consumer protection, privacy, and data privacy laws; and (5) an evaluation of the developer's or deployer's documented risk management policy and program as set forth in section 4193g of this title for conformity with subsection 4193g(a) of this title.
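The audit cadence in § 4193e(a) — one audit before deployment, one six months after, then at least every 18 months — can be laid out as a projected schedule. This is an illustrative sketch only; the helper names are assumptions, and the pre-deployment audit is represented here by the deployment date itself (in practice it must be completed before that date).

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance a date by whole months, clamping the day to the target month."""
    y, m = divmod(d.month - 1 + months, 12)
    y += d.year
    m += 1
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(y, m, min(d.day, days_in_month[m - 1]))

def audit_due_dates(deployment: date, horizon_years: int = 5) -> list:
    """Illustrative audit schedule under § 4193e(a): pre-deployment audit
    (shown as the deployment date), another six months after deployment,
    then at least every 18 months while the system remains in use."""
    dates = [deployment, add_months(deployment, 6)]
    nxt = add_months(dates[-1], 18)
    end = add_months(deployment, 12 * horizon_years)
    while nxt <= end:
        dates.append(nxt)
        nxt = add_months(nxt, 18)
    return dates
```

A system deployed January 1, 2027 would face audits (at minimum) by that date, July 1, 2027, January 1, 2029, July 1, 2030, and so on.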
H-02 Non-Discrimination & Bias Assessment · H-02.6 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193e(f)-(g)
Plain Language
The audit must be completed entirely without AI assistance. The auditor must be truly independent — disqualified if they have provided any service to the commissioning company in the past 12 months, were involved in building or deploying the system, have an employment relationship with the developer or deployer, or have any direct or material indirect financial interest in them. Audit fees cannot be contingent on results, and no incentives or bonuses for positive findings are permitted. These are among the most stringent auditor independence requirements in any state ADS statute.
Statutory Text
(f) An audit conducted under this section shall be completed in its entirety without the assistance of an automated decision system. (g)(1) An auditor shall be an independent entity, including an individual, nonprofit, firm, corporation, partnership, cooperative, or association. (2) For the purposes of this subchapter, no auditor may be commissioned by a developer or deployer of an automated decision system used in consequential decisions if the auditor: (A) has already been commissioned to provide any auditing or nonauditing service, including financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past 12 months; (B) is or was involved in using, developing, integrating, offering, licensing, or deploying the automated decision system; (C) has or had an employment relationship with a developer or deployer that uses, offers, or licenses the automated decision system; or (D) has or had a direct financial interest or a material indirect financial interest in a developer or deployer that uses, offers, or licenses the automated decision system. (3) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193f(a)-(b)
Plain Language
Developers and deployers must file reports with the Attorney General before deployment and then annually or upon each substantial change, whichever comes first. Each report must be accompanied by the most recent independent audit and a legal attestation that either the system complies with the subchapter, or that it may violate or does violate provisions but includes a remediation plan and summary. This reporting is mandatory regardless of audit findings — even a system found to have issues must be reported with a remediation plan rather than withheld.
Statutory Text
(a) Every developer and deployer of an automated decision system used in a consequential decision shall comply with the reporting requirements of this section. Regardless of final findings, reports shall be filed with the Attorney General prior to deployment of an automated decision system used in a consequential decision and then annually, or after each substantial change to the system, whichever comes first. (b) Together with each report required to be filed under this section, developers and deployers shall file with the Attorney General a copy of the last completed independent audit required by this subchapter and a legal attestation that the automated decision system used in a consequential decision: (1) does not violate any provision of this subchapter; or (2) may violate or does violate one or more provisions of this article, that there is a plan of remediation to bring the automated decision system into compliance with this subchapter, and a summary of the plan of remediation.
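The "annually, or after each substantial change, whichever comes first" clock in § 4193f(a) reduces to taking the earlier of two dates. A minimal sketch, assuming the annual deadline runs from the last filing (the function name is a hypothetical label, not statutory language, and it assumes the filing date is not February 29):

```python
from datetime import date

def next_report_due(last_filed: date, substantial_change: date = None) -> date:
    """Illustrative reporting clock under § 4193f(a): after the
    pre-deployment filing, a new report is due annually or after each
    substantial change to the system, whichever comes first."""
    annual = date(last_filed.year + 1, last_filed.month, last_filed.day)
    if substantial_change is not None and substantial_change < annual:
        return substantial_change
    return annual
```

So a report filed March 1, 2026 would next be due March 1, 2027, unless a substantial change (say, September 15, 2026) intervenes and triggers an earlier filing.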
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Automated Decisionmaking
9 V.S.A. § 4193f(c)
Plain Language
Developers must file a detailed report with the Attorney General covering nine categories: system description (including software stack, purpose, and intended uses), intended outputs and permissible secondary uses, training methods and data (including preprocessing, dataset descriptions, data quality, breadth assessment, and legal compliance steps), data management policies, information necessary for deployer compliance monitoring, system capabilities and limitations including safeguards, an internal risk assessment covering discrimination, reliability, privacy, and security risks with mitigation testing, and monitoring recommendations. This is an exceptionally comprehensive developer reporting obligation that combines training data disclosure, model documentation, and risk assessment into a single filing.
Statutory Text
(c) Developers of automated decision systems shall file with the Attorney General a report containing the following: (1) a description of the system including: (A) a description of the system's software stack; (B) the purpose of the system and its expected benefits; and (C) the system's current and intended uses, including what consequential decisions it will support and what stakeholders will be impacted; (2) the intended outputs of the system and whether the outputs can be or are otherwise appropriate to be used for any purpose not previously articulated; (3) the methods for training of their models including: (A) any pre-processing steps taken to prepare datasets for the training of a model underlying an automated decision system; (B) descriptions of the datasets upon which models were trained and evaluated, how and why datasets were collected and the sources of those datasets, and how that training data will be used and maintained; (C) the quality and appropriateness of the data used in the automated decision system's design, development, testing, and operation; (D) whether the data contains sufficient breadth to address the range of real-world inputs the automated decision system might encounter and how any data gaps have been addressed; and (E) steps taken to ensure compliance with privacy, data privacy, data security, and copyright laws; (4) use and data management policies; (5) any other information necessary to allow the deployer to understand the outputs and monitor the system for compliance with this subchapter; (6) any other information necessary to allow the deployer to comply with the requirements of subsection (d) of this section; (7) a description of the system's capabilities and any developer-imposed limitations, including capabilities outside of its intended use, when the system should not be used, any safeguards or guardrails in place to protect against unintended, inappropriate, or disallowed uses, and testing of any safeguards or guardrails; 
(8) an internal risk assessment including documentation and results of testing conducted to identify all reasonably foreseeable risks related to algorithmic discrimination, validity and reliability, privacy and autonomy, and safety and security, as well as actions taken to address those risks, and subsequent testing to assess the efficacy of actions taken to address risks; and (9) whether the system should be monitored and, if so, how the system should be monitored.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking
9 V.S.A. § 4193f(d)
Plain Language
Deployers must file a separate detailed report with the Attorney General covering eight categories: system description, intended outputs, revenue/monetization plans, the system's decision-making role (autonomous vs. supportive), capabilities and limitations with safeguards, a consumer cost-benefit assessment, an internal risk assessment covering discrimination, accuracy, privacy, and security risks with mitigation documentation, and monitoring plans. The deployer report differs from the developer report by including monetization disclosure and consumer cost-benefit analysis rather than training data methodology. Both reports are mandatory and must be filed on the same schedule (pre-deployment and annually or upon substantial change).
Statutory Text
(d) Deployers of automated decision systems used in consequential decisions shall file with the Attorney General a report containing the following: (1) a description of the system, including: (A) a description of the system's software stack; (B) the purpose of the system and its expected benefits; and (C) the system's current and intended uses, including what consequential decisions it will support and what stakeholders will be impacted; (2) the intended outputs of the system and whether the outputs can be or are otherwise appropriate to be used for any purpose not previously articulated; (3) whether the deployer collects revenue or plans to collect revenue from use of the automated decision system in a consequential decision and, if so, how it monetizes or plans to monetize use of the system; (4) whether the system is designed to make consequential decisions itself or whether and how it supports consequential decisions; (5) a description of the system's capabilities and any deployer-imposed limitations, including capabilities outside of its intended use, when the system should not be used, any safeguards or guardrails in place to protect against unintended, inappropriate, or disallowed uses, and testing of any safeguards or guardrails; (6) an assessment of the relative benefits and costs to the consumer given the system's purpose, capabilities, and probable use cases; (7) an internal risk assessment including documentation and results of testing conducted to identify all reasonably foreseeable risks related to algorithmic discrimination, accuracy and reliability, privacy and autonomy, and safety and security, as well as actions taken to address those risks, and subsequent testing to assess the efficacy of actions taken to address risks; and (8) whether the system should be monitored and, if so, how the system should be monitored.
G-02 Public Transparency & Documentation · G-02.4 · Government · Automated Decisionmaking
9 V.S.A. § 4193f(e)(2)
Plain Language
The Attorney General must maintain a publicly accessible online database containing all reports and audits filed under this subchapter, redacted where appropriate, and updated biannually. This creates an indirect public transparency obligation for developers and deployers — their filings will be publicly accessible through the AG's database. While the direct obligation falls on the AG, developers and deployers should assume their reports and audit results will be publicly available in redacted form.
Statutory Text
(e) The Attorney General shall: ... (2) maintain an online database that is accessible to the general public with reports, redacted in accordance with this section, and audits required by this subchapter, which shall be updated biannually.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193f(f)
Plain Language
Systems already deployed for consequential decisions as of July 1, 2025 have an 18-month grace period — developers and deployers must complete and file all required reports and the independent audit no later than January 1, 2027. This is a transition provision for existing systems, which would otherwise be subject to pre-deployment audit and reporting requirements that cannot be satisfied retroactively.
Statutory Text
(f) For automated decision systems already in deployment for use in consequential decisions on or before July 1, 2025, developers and deployers shall not later than 18 months after July 1, 2025 complete and file the reports and complete the independent audit required by this subchapter.
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193g(a)-(b)
Plain Language
Every developer and deployer must plan, document, and implement a risk management policy and program governing their automated decision systems. The program must identify, document, and mitigate known or foreseeable risks of algorithmic discrimination, and must be iteratively reviewed and updated over the system's lifecycle. Reasonableness is assessed against the NIST AI RMF v1.0 (or a later version if the AG determines it is at least as stringent), the entity's size and complexity, the system's nature and scope, and data sensitivity and volume. A single program may cover multiple systems if sufficient. The NIST AI RMF reference functions as a reasonableness benchmark rather than a strict safe harbor.
Statutory Text
(a) Each developer or deployer of automated decision systems used in consequential decisions shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of the automated decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under section 4193b of this title. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of an automated decision system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this subsection shall be reasonable considering the: (1) guidance and standards set forth in version 1.0 of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology in the U.S. Department of Commerce, or the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology if, in the Attorney General's discretion, the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology in the U.S. Department of Commerce is at least as stringent as version 1.0; (2) size and complexity of the developer or deployer; (3) nature, scope, and intended uses of the automated decision system developed or deployed for use in consequential decisions; and (4) sensitivity and volume of data processed in connection with the automated decision system. 
(b) A risk management policy and program implemented pursuant to subsection (a) of this section may cover multiple automated decision systems developed by the same developer or deployed by the same deployer for use in consequential decisions if sufficient.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193g(c)
Plain Language
The Attorney General may at any time require developers or deployers to disclose their risk management policy and program in a form and manner the AG prescribes, and may evaluate the program for compliance. This is an on-demand regulatory disclosure obligation — entities must be prepared to produce their risk management documentation upon AG request, in the AG's specified format.
Statutory Text
(c) The Attorney General may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subsection (a) of this section in a form and manner prescribed by the Attorney General. The Attorney General may evaluate the risk management policy and program to ensure compliance with this section.
Other · Automated Decisionmaking
9 V.S.A. § 4193h(a)
Plain Language
This provision makes any violation of the subchapter a per se unfair and deceptive trade practice under Vermont's Consumer Protection Act and grants harmed consumers access to all CPA remedies. It creates no new compliance obligation — it activates the enforcement framework for all the other obligations in the subchapter.
Statutory Text
(a) A person who violates this subchapter or rules adopted pursuant to this subchapter commits an unfair and deceptive act in commerce in violation of section 2453 of this title (Vermont Consumer Protection Act). A consumer harmed by a violation is eligible to all remedies provided under the Vermont Consumer Protection Act.
Other · Automated Decisionmaking
9 V.S.A. § 4193h(b)
Plain Language
This provision grants the Attorney General authority to adopt implementing rules, investigate, negotiate consent agreements, and bring civil enforcement actions under the same powers available under the general consumer protection enforcement framework. It does not create any new compliance obligation for developers or deployers.
Statutory Text
(b) The Attorney General has the same authority to adopt rules to implement the provisions of this section and to conduct civil investigations, enter into assurances of discontinuance, bring civil actions, and take other enforcement actions as provided under chapter 63, subchapter 1 of this title.
Other · Automated Decisionmaking
9 V.S.A. § 4193c(g)
Plain Language
The consumer notice, explanation, and appeal rights under § 4193c cannot be waived by contract or agreement. The only exception is subsection 4193e(a), which allows developer-deployer contracts to allocate audit responsibilities. This anti-waiver provision prevents deployers from using terms of service to disclaim their disclosure and appeal obligations.
Statutory Text
(g) Except as provided in subsection 4193e(a) of this title, the rights and obligations under this section may not be waived by any person, partnership, association, or corporation.