H-0340
VT · State · USA
● Pending
Proposed Effective Date
2025-07-01
Vermont H.340 — An act relating to regulating developers and deployers of certain automated decision systems
Summary

Vermont H.340 regulates developers and deployers of automated decision systems used in consequential decisions affecting Vermont consumers across education, employment, housing, healthcare, financial services, government services, and other high-stakes domains. The bill prohibits algorithmic discrimination, requires pre-decision consumer notice with detailed disclosures about how the system operates, mandates post-decision explanations and a meaningful human appeal process, and requires mandatory independent audits before deployment and periodically thereafter. Developers and deployers must file detailed reports with the Attorney General, maintain a risk management program aligned with the NIST AI RMF, and provide whistleblower protections. Enforcement is through the Vermont Consumer Protection Act, with both Attorney General enforcement authority and a private right of action for harmed consumers.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement. Violations constitute unfair and deceptive acts in commerce under 9 V.S.A. § 2453 (Vermont Consumer Protection Act). The Attorney General may conduct civil investigations, enter into assurances of discontinuance, bring civil actions, and take other enforcement actions under chapter 63, subchapter 1 of Title 9. A consumer harmed by a violation is entitled to all remedies provided under the Vermont Consumer Protection Act, which includes a private right of action for consumers. No cure period is specified.
Penalties
Violations are enforceable as unfair and deceptive acts under the Vermont Consumer Protection Act (9 V.S.A. § 2453). A consumer harmed by a violation is entitled to all remedies provided under the Act, which may include actual damages, injunctive relief, and attorney's fees and costs. The Vermont CPA also provides for civil penalties in AG-initiated actions.
Who Is Covered
"Deployer" means a person doing business in this State that uses an automated decision system in a consequential decision in the State or provides an automated decision system for use in a consequential decision by the general public in the State. A developer shall also be considered a deployer if its actions satisfy this definition.
"Developer" means a person doing business in this State that designs, codes, or produces an automated decision system for use in a consequential decision or creates a substantial change with respect to an automated decision system for use in a consequential decision, whether for its own use in the State or for use by a third party in the State.
"Deployer-employer" means a deployer that is an employer.
"Developer-employer" means a developer that is an employer.
What Is Covered
"Automated decision system" means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues an output, including a score, classification, or recommendation. "Automated decision system" does not include any software used primarily for basic computerized processes, such as antimalware, antivirus, autocorrect functions, calculators, databases, data storage, electronic communications, firewall, internet domain registration, website loading, networking, spam and robocall filtering, spellcheck tools, spreadsheets, web caching, web hosting, or any tool that relates only to nonemployment internal management affairs such as ordering office supplies or processing payments, and that do not materially affect the rights, liberties, benefits, safety, or welfare of any individual within the State.
Compliance Obligations · 17 obligations
H-02 Non-Discrimination & Bias Assessment · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193b
Plain Language
Developers and deployers are categorically prohibited from using, selling, or sharing an automated decision system (or a product featuring one) for consequential decisions if it produces algorithmic discrimination. The prohibition covers differential treatment or disparate impact across a broad list of protected characteristics. Safe harbors exist for internal testing to identify and mitigate discrimination, expanding applicant pools for diversity purposes, and private clubs not open to the public. This is a strict liability-style prohibition: the system need only "produce" discrimination; no discriminatory intent is required.
Statutory Text
It shall be unlawful discrimination for a developer or deployer to use, sell, or share an automated decision system for use in a consequential decision or a product featuring an automated decision system for use in a consequential decision that produces algorithmic discrimination.
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
9 V.S.A. § 4193c(a)-(b)
Plain Language
Before using an automated decision system for a consequential decision, deployers must provide consumers with a detailed pre-decision notice. The notice must be clear, conspicuous, consumer-friendly, and available in each language the deployer offers its services in. It must describe what personal characteristics the system measures, how it measures them, why they are relevant, what human and automated components contribute to the decision, and include a link to a public webpage with descriptions of the system's outputs, data types and sources, and the most recent impact assessment results. This is a proactive disclosure obligation — it must happen before the system is used, not after.
Statutory Text
(a) Any deployer that employs an automated decision system for a consequential decision shall inform the consumer prior to the use of the system for a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that automated decision systems will be used to make a consequential decision or to assist in making a consequential decision. (b) Any notice provided by a deployer to the consumer pursuant to subsection (a) of this section shall include: (1) a description of the personal characteristics or attributes that the system will measure or assess; (2) the method by which the system measures or assesses those attributes or characteristics; (3) how those attributes or characteristics are relevant to the consequential decisions for which the system should be used; (4) any human components of the system; (5) how any automated components of the system are used to inform the consequential decision; and (6) a direct link to a publicly accessible page on the deployer's website that contains a plain-language description of the: (A) system's outputs; (B) types and sources of data collected from natural persons and processed by the system when it is used to make, or assists in making, a consequential decision; and (C) results of the most recent impact assessment, or an active link to a web page where a consumer can review those results.
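The disclosure elements in subsection (b) amount to a fixed schema, so a deployer can mechanically verify that a notice is complete before it goes out. Below is a minimal sketch of that check in Python; the class and field names are illustrative, not drawn from the bill, with the mapping to statutory subdivisions noted in comments.

```python
from dataclasses import dataclass, fields

@dataclass
class PreDecisionNotice:
    """One field per disclosure item in 9 V.S.A. § 4193c(b); names are illustrative."""
    measured_attributes: list[str]  # (b)(1) personal characteristics the system measures
    measurement_method: str         # (b)(2) how those attributes are measured or assessed
    relevance_explanation: str      # (b)(3) why the attributes are relevant to the decision
    human_components: str           # (b)(4) human components of the system
    automated_role: str             # (b)(5) how automated components inform the decision
    public_page_url: str            # (b)(6) link to outputs/data/impact-assessment page
    language: str                   # a notice must exist in every language services are offered in

def missing_disclosures(notice: PreDecisionNotice) -> list[str]:
    """Names of any empty disclosure fields, to be resolved before the notice ships."""
    return [f.name for f in fields(notice) if not getattr(notice, f.name)]
```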
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.2 · Deployer · Automated Decisionmaking
9 V.S.A. § 4193c(c)
Plain Language
After a consequential decision is made using an automated decision system, the deployer must provide the consumer with a single, plain-language notice explaining the decision. The notice must identify the principal reasons for the decision, the developer of the system (if different from the deployer), what the system's output was, how much the system contributed to the decision, what data was processed, how the consumer's personal data specifically informed the outcome, and what the consumer could have done or could do in the future to secure a different decision. This is a post-decision explanation obligation, distinct from the pre-decision notice in subsection (a).
Statutory Text
(c) Any deployer that employs an automated decision system for a consequential decision shall provide the consumer with a single notice containing a plain-language explanation of the decision that identifies the principal reason or reasons for the consequential decision, including: (1) the identity of the developer of the automated decision system used in the consequential decision, if the deployer is not also the developer; (2) a description of what the output of the automated decision system is, such as a score, recommendation, or other similar description; (3) the degree and manner to which the automated decision system contributed to the decision; (4) the types and sources of data processed by the automated decision system in making the consequential decision; (5) a plain language explanation of how the consumer's personal data informed the consequential decision; and (6) what actions, if any, the consumer might have taken to secure a different decision and the actions that the consumer might take to secure a different decision in the future.
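Subsection (c) enumerates the explanation's required elements in the same way, so the completeness-check pattern carries over. A sketch with illustrative keys mapped to the statutory subdivisions:

```python
# Keys map to 9 V.S.A. § 4193c(c)(1)-(6); key names are illustrative.
POST_DECISION_ELEMENTS = {
    "developer_identity":  "(c)(1) developer of the system, if not also the deployer",
    "output_description":  "(c)(2) what the output is (score, recommendation, etc.)",
    "contribution_degree": "(c)(3) degree and manner the system contributed",
    "data_types_sources":  "(c)(4) types and sources of data processed",
    "personal_data_use":   "(c)(5) how the consumer's personal data informed the outcome",
    "corrective_actions":  "(c)(6) actions the consumer could take for a different result",
}

def explanation_gaps(explanation: dict[str, str]) -> list[str]:
    """Required elements that are absent or empty in a drafted explanation."""
    return [k for k in POST_DECISION_ELEMENTS if not explanation.get(k)]
```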
H-01 Human Oversight of Automated Decisions · H-01.4 · H-01.5 · Deployer · Automated Decisionmaking
9 V.S.A. § 4193c(d)
Plain Language
Deployers must provide consumers with a meaningful appeal process for consequential decisions. At minimum, consumers must be able to formally contest the decision, submit supporting information, and obtain human review. The human reviewer must be trained, impartial, free of conflicts of interest, not involved in the initial decision, and protected from retaliation for exercising their review functions. The deployer must allocate sufficient resources for effective appeals. The deployer must respond within 45 days, with one possible 45-day extension for complex or high-volume appeals. This is an unusually detailed human review requirement — it specifies reviewer qualifications, independence, and anti-retaliation protections, going beyond most comparable state frameworks.
Statutory Text
(d)(1) A deployer shall provide and explain a process for a consumer to appeal a decision, which shall at minimum allow the consumer to: (A) formally contest the decision; (B) provide information to support their position; and (C) obtain meaningful human review of the decision. (2) For an appeal made pursuant to subdivision (1) of this subsection: (A) a deployer shall designate a human reviewer who: (i) is trained and qualified to understand the consequential decision being appealed, the consequences of the decision for the consumer, how to evaluate and how to serve impartially, including by avoiding prejudgment of the facts at issue, conflict of interest, and bias; (ii) does not have a conflict of interest for or against the deployer or the consumer; (iii) was not involved in the initial decision being appealed; (iv) shall enjoy protection from dismissal or its equivalent, disciplinary measures, or other adverse treatment for exercising their functions under this section; and (v) shall be allocated sufficient human resources by the deployer to conduct an effective appeal of the decision; and (B) the human reviewer shall consider the information provided by the consumer in their appeal and may consider other sources of information relevant to the consequential decision. (3) A deployer shall respond to a consumer's appeal not later than 45 days after receipt of the appeal. That period may be extended once by an additional 45 days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the consumer of any extension not later than 45 days after receipt of the appeal, together with the reasons for the delay.
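The response-deadline mechanics in subdivision (d)(3) reduce to date arithmetic: 45 days from receipt, one optional 45-day extension, and the extension notice itself due within the initial 45-day window. A sketch, assuming calendar days since the bill does not specify business days:

```python
from datetime import date, timedelta

RESPONSE_WINDOW = timedelta(days=45)  # § 4193c(d)(3); calendar days assumed

def appeal_deadlines(received: date, extended: bool = False) -> dict[str, date]:
    """Deadlines for a consumer appeal received on `received`. One 45-day
    extension is permitted; the consumer must be informed of the extension,
    with reasons, within the initial 45-day window."""
    initial = received + RESPONSE_WINDOW
    return {
        "extension_notice_due": initial,
        "response_due": initial + RESPONSE_WINDOW if extended else initial,
    }

# An appeal received 2025-08-01 and extended once must be resolved by 2025-10-30.
print(appeal_deadlines(date(2025, 8, 1), extended=True))
```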
Other · Automated Decisionmaking
9 V.S.A. § 4193c(e)
Plain Language
Developers and deployers are legally responsible for the quality and accuracy of all consequential decisions made by their automated decision systems, including any bias or algorithmic discrimination that results. This is a broad liability assignment — it makes the developer or deployer answerable for outcomes regardless of which specific obligation is at issue. It reinforces that neither party can disclaim responsibility for the system's outputs.
Statutory Text
(e) The deployer or developer of an automated decision system is legally responsible for the quality and accuracy of all consequential decisions made, including any bias or algorithmic discrimination resulting from the operation of the automated decision system.
H-02 Non-Discrimination & Bias Assessment · H-02.6 · Developer · Automated Decisionmaking
9 V.S.A. § 4193c(f)
Plain Language
Developers are prohibited from using, selling, or sharing an automated decision system for consequential decisions unless it has passed an independent audit under § 4193e. If the audit finds algorithmic discrimination, the developer must halt all distribution until a post-adjustment audit confirms the discrimination has been rectified. This creates a deployment gate — no system may enter the market for consequential decisions without first passing an independent audit, and a finding of discrimination triggers an automatic halt-and-fix obligation.
Statutory Text
(f) A developer shall not use, sell, or share an automated decision system for use in a consequential decision or a product featuring an automated decision system for use in a consequential decision that has not passed an independent audit, in accordance with section 4193e of this title. If an independent audit finds that an automated decision system for use in a consequential decision does produce algorithmic discrimination, the developer shall not use, sell, or share the system until the algorithmic discrimination has been proven to be rectified by a post-adjustment audit.
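Subsection (f) reads naturally as a gate in a release process: no use, sale, or sharing without a passed audit, and a discrimination finding halts distribution until a post-adjustment audit confirms the fix. A minimal sketch of that gating logic; the status values are illustrative:

```python
from enum import Enum, auto

class AuditStatus(Enum):
    NOT_AUDITED = auto()
    PASSED = auto()
    FAILED_DISCRIMINATION = auto()  # audit found algorithmic discrimination
    REMEDIATED_PASSED = auto()      # post-adjustment audit confirmed rectification

def may_distribute(status: AuditStatus) -> bool:
    """Gate on use/sale/sharing under 9 V.S.A. § 4193c(f)."""
    return status in (AuditStatus.PASSED, AuditStatus.REMEDIATED_PASSED)
```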
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193d(a)
Plain Language
Developer-employers and deployer-employers may not prevent employees — including former employees and independent contractors — from disclosing suspected violations of this subchapter to the Attorney General, whether through employment terms, NDAs, or enforcement of contractual restrictions. They also may not retaliate against employees who make such disclosures. This is a broad anti-retaliation provision covering both suppression (preventing disclosure) and punishment (retaliating after disclosure).
Statutory Text
(a) Developer-employers and deployer-employers of automated decision systems used in consequential decisions shall not: (1) prevent an employee from disclosing information to the Attorney General, including through terms and conditions of employment or seeking to enforce terms and conditions of employment, if the employee has reasonable cause to believe the information indicates a violation of this subchapter; or (2) retaliate against an employee for disclosing information to the Attorney General pursuant to subdivision (1) of this subsection.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.4 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193d(b)
Plain Language
Developer-employers and deployer-employers must provide clear notice to all employees working on automated decision systems about their rights and responsibilities under the subchapter, including the right of contractor and subcontractor employees to use the internal whistleblower process. A safe harbor presumption of compliance exists if the employer either (1) continuously posts workplace notices, ensures new employees receive notice, and periodically notifies remote workers, or (2) provides written notice at least annually to all employees with receipt acknowledgment. The safe harbor is a presumption, not a guarantee — an employer following it is presumed compliant but could still face challenge.
Statutory Text
(b) Developer-employers and deployer-employers of automated decision systems used in consequential decisions shall provide a clear notice to all employees working on automated decision systems of their rights and responsibilities under this subchapter, including the right of employees of contractors and subcontractors to use the developer's internal process for making protected disclosures pursuant to subsection (c) of this section. A developer-employer or deployer-employer is presumed to be in compliance with the requirements of this subsection if the developer-employer or deployer-employer does either of the following: (1) at all times: (A) posts and displays within all workplaces maintained by the developer-employer or deployer-employer a notice to all employees of their rights and responsibilities under this subchapter; (B) ensures that all new employees receive equivalent notice; and (C) ensures that employees who work remotely periodically receive an equivalent notice; or (2) not less frequently than once every year, provides written notice to all employees of their rights and responsibilities under this subchapter and ensures that the notice is received and acknowledged by all of those employees.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.1 · Developer · Automated Decisionmaking
9 V.S.A. § 4193d(c)
Plain Language
Each developer-employer must maintain a reasonable internal process for employees to anonymously report suspected violations of this subchapter, other laws, false statements about safety and security protocols, or undisclosed known risks. The process must provide at least monthly status updates to the disclosing employee on the investigation and any responsive actions. This obligation falls only on developer-employers — not deployer-employers — and is notable for its breadth: it covers not just violations of this subchapter but of any other law, plus false safety statements and risk concealment.
Statutory Text
(c) Each developer-employer shall provide a reasonable internal process through which an employee may anonymously disclose information to the developer if the employee believes in good faith that the information indicates that the developer has violated any provision of this subchapter or any other law, or has made false or materially misleading statements related to its safety and security protocol, or failed to disclose known risks to employees, including, at a minimum, a monthly update to the person who made the disclosure regarding the status of the developer's investigation of the disclosure and the actions taken by the developer in response to the disclosure.
H-02 Non-Discrimination & Bias Assessment · H-02.6 · H-02.7 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193e(a)-(g)
Plain Language
Developers and deployers are jointly responsible for ensuring that an independent audit is conducted before deployment, six months after deployment, and at least every 18 months thereafter. The audit must cover data management policies, system validity and reliability by use case, comparative demographic performance analysis for algorithmic discrimination, compliance with existing laws, and evaluation of the risk management program. All audits must be delivered to the Attorney General regardless of findings. The auditor must be truly independent — no prior service relationship within 12 months, no involvement in the system, no employment or financial interest in the developer or deployer. The audit must be performed entirely without AI assistance. Auditor fees cannot be contingent on results, and no incentives may be offered for positive findings. Developer and deployer must contractually allocate audit responsibilities; absent agreement, they are jointly and severally liable.
Statutory Text
(a) Prior to deployment of an automated decision system for use in a consequential decision, six months after deployment, and at least every 18 months thereafter for each calendar year an automated decision system is in use in consequential decisions after the first post-deployment audit, the developer and deployer shall be jointly responsible for ensuring that an independent audit is conducted in compliance with the provisions of this section to ensure that the product does not produce algorithmic discrimination and complies with the provisions of this subchapter. The developer and deployer shall enter into a contract specifying which party is responsible for the costs, oversight, and results of the audit. Absent an agreement of responsibility through contract, the developer and deployer shall be jointly and severally liable for any violations of this section. Regardless of final findings, the deployer or developer shall deliver all audits conducted under this section to the Attorney General. (b) A deployer or developer may contract with more than one auditor to fulfill the requirements of this section. (c) The audit shall include the following: (1) an analysis of data management policies, including whether personal or sensitive data relating to a consumer is subject to data security protection standards that comply with the requirements of applicable State law; (2) an analysis of the system validity and reliability according to each specified use case listed in the entity's reporting document filed by the developer or deployer pursuant to section 4193f of this title; (3) a comparative analysis of the system's performance when used on consumers of different demographic groups and a determination of whether the system produces algorithmic discrimination in violation of this subchapter by each intended and foreseeable identified use as identified by the deployer and developer pursuant to section 4193f of this title; (4) an analysis of how the technology complies with existing relevant federal, State, and local labor, civil rights, consumer protection, privacy, and data privacy laws; and (5) an evaluation of the developer's or deployer's documented risk management policy and program as set forth in section 4193g of this title for conformity with subsection 4193g(a) of this title. (e) The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer pursuant to section 4193f of this title. (f) An audit conducted under this section shall be completed in its entirety without the assistance of an automated decision system. (g)(1) An auditor shall be an independent entity, including an individual, nonprofit, firm, corporation, partnership, cooperative, or association. 
(2) For the purposes of this subchapter, no auditor may be commissioned by a developer or deployer of an automated decision system used in consequential decisions if the auditor: (A) has already been commissioned to provide any auditing or nonauditing service, including financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past 12 months; (B) is or was involved in using, developing, integrating, offering, licensing, or deploying the automated decision system; (C) has or had an employment relationship with a developer or deployer that uses, offers, or licenses the automated decision system; or (D) has or had a direct financial interest or a material indirect financial interest in a developer or deployer that uses, offers, or licenses the automated decision system. (3) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result.
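Two parts of § 4193e lend themselves to mechanical checks: the audit cadence in subsection (a) and the auditor-independence screen in subsection (g)(2). A sketch of both, using approximate calendar-day month lengths and illustrative names:

```python
from dataclasses import dataclass
from datetime import date, timedelta

def audit_due_dates(deployed: date, horizon_years: int = 5) -> list[date]:
    """Post-deployment audit dates under § 4193e(a): six months after deployment,
    then at least every 18 months. (The pre-deployment audit is due before
    `deployed` and is not listed here.)"""
    dates = [deployed + timedelta(days=182)]           # ~6 months
    while dates[-1] < deployed + timedelta(days=365 * horizon_years):
        dates.append(dates[-1] + timedelta(days=548))  # ~18 months
    return dates

@dataclass
class AuditorProfile:
    served_commissioner_past_12mo: bool  # (g)(2)(A) any audit or non-audit service
    involved_with_system: bool           # (g)(2)(B) used/developed/integrated/deployed it
    employment_relationship: bool        # (g)(2)(C) employed by a developer or deployer
    financial_interest: bool             # (g)(2)(D) direct or material indirect interest

def auditor_is_independent(a: AuditorProfile) -> bool:
    """All four disqualifiers must be absent."""
    return not any(vars(a).values())
```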
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193f(a)-(b)
Plain Language
Every developer and deployer must file reports with the Attorney General before deployment and then annually or after each substantial change, whichever comes first. Each report must be accompanied by the most recent independent audit and a legal attestation either certifying compliance or disclosing known or potential violations with a remediation plan and summary. This creates a continuous disclosure obligation — the attestation requirement means developers and deployers must self-report potential violations when they file, not just when asked.
Statutory Text
(a) Every developer and deployer of an automated decision system used in a consequential decision shall comply with the reporting requirements of this section. Regardless of final findings, reports shall be filed with the Attorney General prior to deployment of an automated decision system used in a consequential decision and then annually, or after each substantial change to the system, whichever comes first. (b) Together with each report required to be filed under this section, developers and deployers shall file with the Attorney General a copy of the last completed independent audit required by this subchapter and a legal attestation that the automated decision system used in a consequential decision: (1) does not violate any provision of this subchapter; or (2) may violate or does violate one or more provisions of this article, that there is a plan of remediation to bring the automated decision system into compliance with this subchapter, and a summary of the plan of remediation.
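The filing trigger in subsection (a), annually or after each substantial change, whichever comes first, collapses to a minimum over two candidate dates. A sketch, assuming a 365-day year:

```python
from datetime import date, timedelta
from typing import Optional

def next_report_due(last_filed: date, substantial_change: Optional[date] = None) -> date:
    """Next AG filing deadline under 9 V.S.A. § 4193f(a): one year after the
    last report, or upon a substantial change to the system, whichever is first."""
    annual = last_filed + timedelta(days=365)
    return min(annual, substantial_change) if substantial_change else annual
```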
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Automated Decisionmaking
9 V.S.A. § 4193f(c)
Plain Language
Developers must file a comprehensive report with the Attorney General covering nine categories of information: system description (software stack, purpose, intended uses); intended outputs and secondary use potential; detailed training methodology including pre-processing, dataset descriptions, data quality and breadth, and legal compliance steps; data management policies; information for deployer compliance; system capabilities, limitations, safeguards, and guardrail testing; an internal risk assessment covering algorithmic discrimination, validity, reliability, privacy, autonomy, safety, and security; and monitoring recommendations. This is one of the most granular developer-reporting requirements in any state AI bill — it requires disclosure of training data methodology and data gap analysis, not just system-level descriptions.
Statutory Text
(c) Developers of automated decision systems shall file with the Attorney General a report containing the following: (1) a description of the system including: (A) a description of the system's software stack; (B) the purpose of the system and its expected benefits; and (C) the system's current and intended uses, including what consequential decisions it will support and what stakeholders will be impacted; (2) the intended outputs of the system and whether the outputs can be or are otherwise appropriate to be used for any purpose not previously articulated; (3) the methods for training of their models including: (A) any pre-processing steps taken to prepare datasets for the training of a model underlying an automated decision system; (B) descriptions of the datasets upon which models were trained and evaluated, how and why datasets were collected and the sources of those datasets, and how that training data will be used and maintained; (C) the quality and appropriateness of the data used in the automated decision system's design, development, testing, and operation; (D) whether the data contains sufficient breadth to address the range of real-world inputs the automated decision system might encounter and how any data gaps have been addressed; and (E) steps taken to ensure compliance with privacy, data privacy, data security, and copyright laws; (4) use and data management policies; (5) any other information necessary to allow the deployer to understand the outputs and monitor the system for compliance with this subchapter; (6) any other information necessary to allow the deployer to comply with the requirements of subsection (d) of this section; (7) a description of the system's capabilities and any developer-imposed limitations, including capabilities outside of its intended use, when the system should not be used, any safeguards or guardrails in place to protect against unintended, inappropriate, or disallowed uses, and testing of any safeguards or guardrails; (8) an internal risk assessment including documentation and results of testing conducted to identify all reasonably foreseeable risks related to algorithmic discrimination, validity and reliability, privacy and autonomy, and safety and security, as well as actions taken to address those risks, and subsequent testing to assess the efficacy of actions taken to address risks; and (9) whether the system should be monitored and, if so, how the system should be monitored.
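Because subsection (c) enumerates fixed categories, a developer's report can be kept as a structured document and checked for completeness before filing. A sketch with illustrative keys mapped to the statutory subdivisions:

```python
# Illustrative keys for the nine categories in 9 V.S.A. § 4193f(c).
DEVELOPER_REPORT_SECTIONS = {
    "system_description":       "(c)(1) software stack, purpose, current and intended uses",
    "intended_outputs":         "(c)(2) outputs and potential unarticulated secondary uses",
    "training_methods":         "(c)(3) pre-processing, datasets, data quality and breadth, legal steps",
    "data_management":          "(c)(4) use and data management policies",
    "deployer_monitoring_info": "(c)(5) information the deployer needs to understand and monitor outputs",
    "deployer_compliance_info": "(c)(6) information the deployer needs for its own subsection (d) report",
    "capabilities_limits":      "(c)(7) capabilities, limitations, guardrails, and guardrail testing",
    "risk_assessment":          "(c)(8) foreseeable risks, mitigations, and efficacy re-testing",
    "monitoring_plan":          "(c)(9) whether and how the system should be monitored",
}

def unfiled_sections(report: dict[str, str]) -> list[str]:
    """Categories still absent or empty in a draft developer report."""
    return [k for k in DEVELOPER_REPORT_SECTIONS if not report.get(k)]
```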
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking
9 V.S.A. § 4193f(d)
Plain Language
Deployers must file their own report with the Attorney General covering eight categories: system description (software stack, purpose, intended uses); intended outputs and secondary use potential; monetization plans; whether the system makes or supports consequential decisions; capabilities, limitations, safeguards and guardrail testing; a cost-benefit assessment for consumers; an internal risk assessment covering algorithmic discrimination, accuracy, reliability, privacy, autonomy, safety, and security; and monitoring recommendations. The deployer report differs from the developer report in requiring revenue disclosure and consumer cost-benefit analysis while omitting training data methodology details.
Statutory Text
(d) Deployers of automated decision systems used in consequential decisions shall file with the Attorney General a report containing the following: (1) a description of the system, including: (A) a description of the system's software stack; (B) the purpose of the system and its expected benefits; and (C) the system's current and intended uses, including what consequential decisions it will support and what stakeholders will be impacted; (2) the intended outputs of the system and whether the outputs can be or are otherwise appropriate to be used for any purpose not previously articulated; (3) whether the deployer collects revenue or plans to collect revenue from use of the automated decision system in a consequential decision and, if so, how it monetizes or plans to monetize use of the system; (4) whether the system is designed to make consequential decisions itself or whether and how it supports consequential decisions; (5) a description of the system's capabilities and any deployer-imposed limitations, including capabilities outside of its intended use, when the system should not be used, any safeguards or guardrails in place to protect against unintended, inappropriate, or disallowed uses, and testing of any safeguards or guardrails; (6) an assessment of the relative benefits and costs to the consumer given the system's purpose, capabilities, and probable use cases; (7) an internal risk assessment including documentation and results of testing conducted to identify all reasonably foreseeable risks related to algorithmic discrimination, accuracy and reliability, privacy and autonomy, and safety and security, as well as actions taken to address those risks, and subsequent testing to assess the efficacy of actions taken to address risks; and (8) whether the system should be monitored and, if so, how the system should be monitored.
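The deployer report reuses most of that skeleton; the delta against the developer schema sketched above is the deployer-specific categories and the dropped training-data items. Illustrative keys again:

```python
# Deployer-only categories under 9 V.S.A. § 4193f(d); the remaining sections
# mirror the developer schema (description, outputs, capabilities, risk
# assessment, monitoring), minus the training-methods and data-management items.
DEPLOYER_ONLY_SECTIONS = {
    "monetization":          "(d)(3) whether and how use of the system is or will be monetized",
    "decision_role":         "(d)(4) whether the system makes or merely supports decisions",
    "consumer_cost_benefit": "(d)(6) relative benefits and costs to the consumer",
}
```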
G-02 Public Transparency & Documentation · G-02.4 · Government · Automated Decisionmaking
9 V.S.A. § 4193f(e)(2)
Plain Language
The Attorney General must maintain a publicly accessible online database containing the filed reports and audit results required by this subchapter, updated biannually. Reports may be redacted under rules the AG adopts to protect sensitive and protected information. While this is primarily an AG obligation, it has compliance implications for developers and deployers because their filed reports and audit results will be public — they should prepare filings with the understanding that they will be disclosed (subject to approved redactions).
Statutory Text
(e) The Attorney General shall: (2) maintain an online database that is accessible to the general public with reports, redacted in accordance with this section, and audits required by this subchapter, which shall be updated biannually.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193f(f)
Plain Language
Systems already deployed for consequential decisions as of July 1, 2025 receive a transitional period: developers and deployers have until January 1, 2027 (18 months after July 1, 2025) to complete and file all required reports and complete the independent audit. New systems deployed after July 1, 2025 must comply with the pre-deployment reporting and audit requirements before deployment. This is a grandfathering provision that gives existing deployments time to come into compliance.
Statutory Text
(f) For automated decision systems already in deployment for use in consequential decisions on or before July 1, 2025, developers and deployers shall not later than 18 months after July 1, 2025 complete and file the reports and complete the independent audit required by this subchapter.
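The transition rule in subsection (f) is a single date computation: systems already deployed on or before July 1, 2025 must have all reports filed and the independent audit completed within 18 months, i.e., by January 1, 2027. A sketch:

```python
from datetime import date

EFFECTIVE = date(2025, 7, 1)
GRANDFATHER_DEADLINE = date(2027, 1, 1)  # 18 months after the effective date

def compliance_deadline(deployed: date) -> str:
    """Existing systems get the transition window; new systems must complete
    reporting and the audit before deployment."""
    if deployed <= EFFECTIVE:
        return f"reports and audit due by {GRANDFATHER_DEADLINE.isoformat()}"
    return "reports and audit required before deployment"
```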
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193g(a)-(b)
Plain Language
Every developer and deployer must plan, document, and implement a risk management policy and program governing its automated decision systems. The program must specify the principles, processes, and personnel used to identify, document, and mitigate known or foreseeable risks of algorithmic discrimination. It must be iterative with regular systematic reviews and updates over the system's lifecycle. Reasonableness is assessed considering: NIST AI RMF v1.0 (or a later version if the AG determines it is at least as stringent); the entity's size and complexity; the nature and scope of the system; and the sensitivity and volume of data processed. A single program may cover multiple systems if sufficient. The NIST AI RMF reference provides a benchmark standard, though compliance is evaluated based on reasonableness rather than strict conformity.
Statutory Text
(a) Each developer or deployer of automated decision systems used in consequential decisions shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of the automated decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under section 4193b of this title. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of an automated decision system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this subsection shall be reasonable considering the: (1) guidance and standards set forth in version 1.0 of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology in the U.S. Department of Commerce, or the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology if, in the Attorney General's discretion, the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology in the U.S. Department of Commerce is at least as stringent as version 1.0; (2) size and complexity of the developer or deployer; (3) nature, scope, and intended uses of the automated decision system developed or deployed for use in consequential decisions; and (4) sensitivity and volume of data processed in connection with the automated decision system. (b) A risk management policy and program implemented pursuant to subsection (a) of this section may cover multiple automated decision systems developed by the same developer or deployed by the same deployer for use in consequential decisions if sufficient.
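The four reasonableness factors in subsection (a) suggest a program record that documents each factor explicitly, anchored to the NIST AI RMF 1.0 core functions (Govern, Map, Measure, Manage). A sketch with illustrative structure:

```python
from dataclasses import dataclass, field

NIST_AI_RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")  # RMF 1.0 core functions

@dataclass
class RiskProgramRecord:
    """Documents the reasonableness factors in 9 V.S.A. § 4193g(a); names are illustrative."""
    rmf_mapping: dict[str, str]   # (a)(1) how each RMF core function is addressed
    org_size_complexity: str      # (a)(2) size and complexity of the developer/deployer
    system_nature_scope: str      # (a)(3) nature, scope, and intended uses of the system
    data_sensitivity_volume: str  # (a)(4) sensitivity and volume of data processed
    covered_systems: list[str] = field(default_factory=list)  # (b) one program may span systems
    review_log: list[str] = field(default_factory=list)       # iterative, lifecycle-long reviews

def rmf_coverage_gaps(record: RiskProgramRecord) -> list[str]:
    """RMF core functions the program's documentation does not yet address."""
    return [f for f in NIST_AI_RMF_FUNCTIONS if not record.rmf_mapping.get(f)]
```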
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Deployer · Automated Decisionmaking
9 V.S.A. § 4193g(c)
Plain Language
The Attorney General may at any time require a developer or deployer to disclose its risk management policy and program in a form and manner the AG prescribes, and may evaluate the program for compliance. This is a demand-driven regulatory disclosure — developers and deployers should maintain their risk management documentation in a form that can be produced upon AG request.
Statutory Text
(c) The Attorney General may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subsection (a) of this section in a form and manner prescribed by the Attorney General. The Attorney General may evaluate the risk management policy and program to ensure compliance with this section.