PS-01
Public Sector AI
Government AI Accountability
Government agencies that develop, procure, or deploy AI systems must maintain inventories of those systems, conduct impact assessments before deploying AI in consequential public-facing roles, meet defined procurement standards, and disclose AI use to affected individuals. Vendors selling AI to government agencies must be able to demonstrate standards compliance.
Applies to: Developer, Government Sector, Government System
Bills — Enacted: 1 unique bill
Bills — Proposed: 9
Last Updated: 2026-03-29
Sub-Obligations: 4
Bills That Map This Requirement (10 bills)
Pending 2026-01-01
PS-01.4
Bus. & Prof. Code § 22756.1(d)(1)-(2)
Plain Language
State agencies deploying a high-risk automated decision system must require the developer to provide a copy of the impact assessment. The state agency must keep the impact assessment confidential. This creates a procurement-adjacent obligation: state agencies cannot simply deploy a high-risk system without first obtaining and retaining the developer's impact assessment. The confidentiality protection is absolute — it overrides other California disclosure laws.
(d) (1) A state agency shall require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment conducted pursuant to this section. (2) Notwithstanding any other law, an impact assessment provided to a state agency pursuant to this subdivision shall be kept confidential.
Pending 2026-01-01
PS-01.4
Pub. Contract Code § 10285.8(a)-(b)
Plain Language
State agencies are prohibited from awarding contracts for high-risk automated decision systems to any person who has violated the Unruh Civil Rights Act, the California Fair Employment and Housing Act, or this chapter (Chapter 24.6 of the Business and Professions Code). This creates a procurement disqualification tied to civil rights and AI compliance history — vendors with prior violations of these laws are ineligible. The statute does not specify a lookback period or rehabilitation process.
(a) A state agency shall not award a contract for a high-risk automated decision system to a person who has violated any of the following: (1) The Unruh Civil Rights Act (Section 51 of the Civil Code). (2) The California Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code). (3) Chapter 24.6 (commencing with Section 22756) of Division 8 of the Business and Professions Code. (b) As used in this section, "high-risk automated decision system" has the same meaning as defined in Section 22756 of the Business and Professions Code.
Pending 2026-10-01
PS-01.2, PS-01.4
Sec. 14(b)-(c)
Plain Language
State agencies are broadly prohibited from using AI technology in any function related to public assistance benefits or that materially impacts rights, civil liberties, safety, or welfare — unless specifically authorized by law. Even when authorized, agencies may not procure AI technology unless the use is specifically authorized by law, and must contract with an independent auditor for a bias audit meeting the same requirements as the private-sector bias audit in Section 8. The completed bias audit must be submitted to the Commissioner of Administrative Services and posted on the agency's website at least 60 days before deployment. Personally identifiable information may be redacted.
(b) (1) No state agency, or any entity acting on behalf of a state agency, shall, directly or indirectly, utilize or apply any artificial intelligence technology in performing any function that (A) is related to the delivery of any public assistance benefit to individuals in the state by such agency, or (B) will have a material impact on the rights, civil liberties, safety or welfare of individuals in the state, unless such utilization or application is specifically authorized by law. (2) No state agency shall authorize any procurement, purchase or acquisition of any artificial intelligence technology, except where the use of such system is specifically authorized by law. (3) If a state agency is authorized to procure, purchase or acquire an artificial intelligence technology, the state agency shall contract with an independent auditor to complete a bias audit pursuant to subsection (a) of section 8 of this act. (c) Any bias audit completed pursuant to subdivision (3) of subsection (b) of this section shall be submitted to the Commissioner of Administrative Services, in a form and manner prescribed by the commissioner, and posted on the agency's Internet web site not later than sixty days prior to deployment of such artificial intelligence technology. Any agency may redact any data in such impact statement to remove personally identifiable information of any individual.
Enacted 2023-07-01
PS-01.1, PS-01.3
Section 1(b)(1)-(2)
Plain Language
The Department of Administrative Services must conduct and publicly publish an annual inventory of all AI systems used by any Connecticut state agency. The inventory must cover each system's name, vendor, capabilities, whether it was used to make or support decisions, and whether it underwent a pre-implementation impact assessment. The first inventory was due by December 31, 2023. Publication must be on the state's open data portal, ensuring machine-readable public access. This is one of the earliest state-level government AI inventory mandates.
(b) (1) Not later than December 31, 2023, and annually thereafter, the Department of Administrative Services shall conduct an inventory of all systems that employ artificial intelligence and are in use by any state agency. Each such inventory shall include at least the following information for each such system: (A) The name of such system and the vendor, if any, that provided such system; (B) A description of the general capabilities and uses of such system; (C) Whether such system was used to independently make, inform or materially support a conclusion, decision or judgment; and (D) Whether such system underwent an impact assessment prior to implementation. (2) The Department of Administrative Services shall make each inventory conducted pursuant to subdivision (1) of this subsection publicly available on the state's open data portal.
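The four inventory fields in subdivision (1)(A)-(D) map naturally onto a machine-readable record, which is what publication on the state's open data portal implies. The following is a minimal sketch of one such record, assuming a JSON representation; the field names and the example system are illustrative assumptions, not drawn from the statute:

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class AIInventoryEntry:
    """One row of the annual inventory required by Section 1(b)(1)(A)-(D).

    The statute prescribes the content of each field, not a schema;
    every field name here is an assumption for illustration only."""
    system_name: str      # (A) name of the system
    vendor: Optional[str] # (A) vendor, if any, that provided the system
    capabilities: str     # (B) general capabilities and uses
    decision_role: bool   # (C) used to make, inform, or materially support a decision
    impact_assessed: bool # (D) underwent an impact assessment prior to implementation

entry = AIInventoryEntry(
    system_name="Benefits Eligibility Screener",  # hypothetical system
    vendor="Acme AI, Inc.",                       # hypothetical vendor
    capabilities="Flags incomplete applications for caseworker review",
    decision_role=True,
    impact_assessed=True,
)

# Serialize for machine-readable publication, per subdivision (2)
print(json.dumps(asdict(entry), indent=2))
```

Any equivalent tabular format (CSV, an open-data dataset) would satisfy the same four fields; the statute is silent on format.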
Enacted 2023-07-01
PS-01.4
Section 2(b)(1)-(3)
Plain Language
The Office of Policy and Management must develop, publish, and maintain AI governance policies and procedures covering the full lifecycle of AI systems used by state agencies — development, procurement, implementation, utilization, and ongoing assessment. The policies must at minimum address procurement standards, anti-discrimination and disparate impact protections across a broad set of protected characteristics, pre-implementation impact assessments, and ongoing DAS assessments. The policies must be publicly posted on OPM's website and may be revised at the Secretary's discretion. This effectively creates an AI governance framework for all Connecticut executive branch agencies, with OPM as the standard-setting body and DAS as the compliance assessor.
(b) (1) Not later than February 1, 2024, the Office of Policy and Management shall develop and establish policies and procedures concerning the development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence and are in use by state agencies. Such policies and procedures shall, at a minimum, include policies and procedures that: (A) Govern the procurement, implementation and ongoing assessment of such systems by state agencies; (B) Are sufficient to ensure that no such system (i) results in any unlawful discrimination against any individual or group of individuals, or (ii) has any unlawful disparate impact on any individual or group of individuals on the basis of any actual or perceived differentiating characteristic, including, but not limited to, age, genetic information, color, ethnicity, race, creed, religion, national origin, ancestry, sex, gender identity or expression, sexual orientation, marital status, familial status, pregnancy, veteran status, disability or lawful source of income; (C) Require a state agency to assess the likely impact of any such system before implementing such system; and (D) Provide for the Department of Administrative Services to perform ongoing assessments of such systems to ensure that no such system results in any unlawful discrimination or disparate impact described in subparagraph (B) of this subdivision. (2) The Office of Policy and Management may revise the policies and procedures established pursuant to subdivision (1) of this subsection if the Secretary of the Office of Policy and Management determines, in said secretary's discretion, that such revision is necessary. (3) The Office of Policy and Management shall post the policies and procedures established pursuant to subdivision (1) of this subsection, and any revision made to such policies and procedures pursuant to subdivision (2) of this subsection, on the office's Internet web site.
Enacted 2023-07-01
PS-01.2
Section 2(c)
Plain Language
Beginning February 1, 2024, no Connecticut state agency may deploy a new AI system unless it has first completed a pre-implementation impact assessment confirming the system will not result in unlawful discrimination or disparate impact. The assessment must follow OPM's policies. Separately, even if the assessment is completed, the agency head retains discretionary authority to block implementation if they determine the system would cause unlawful discrimination. This creates a dual gate: both the impact assessment must be satisfied and the agency head must not exercise a discretionary veto. This is a hard deployment prohibition — agencies cannot deploy first and assess later.
(c) Beginning on February 1, 2024, no state agency shall implement any system that employs artificial intelligence (1) unless the state agency has performed an impact assessment, in accordance with the policies and procedures established pursuant to subsection (b) of this section, to ensure that such system will not result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (b) of this section, or (2) if the head of such state agency determines, in such agency head's discretion, that such system will result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (b) of this section.
Enacted 2023-07-01
PS-01.1, PS-01.3
Section 3(b)(1)-(2)
Plain Language
The Judicial Department must conduct and publicly publish an annual inventory of its AI systems, mirroring the executive branch inventory requirement in Section 1. The same four data fields are required: system name and vendor, capabilities, decision-making role, and whether a pre-implementation impact assessment was performed. Publication is on the Judicial Department's website rather than the state open data portal. The first inventory was due by December 31, 2023. This parallel structure reflects the constitutional separation of powers — the Judicial Department governs its own AI systems independently of the executive branch framework.
(b) (1) Not later than December 31, 2023, and annually thereafter, the Judicial Department shall conduct an inventory of the department's systems that employ artificial intelligence. Each such inventory shall include at least the following information for each such system: (A) The name of such system and the vendor, if any, that provided such system; (B) A description of the general capabilities and uses of such system; (C) Whether such system was used to independently make, inform or materially support a conclusion, decision or judgment; and (D) Whether such system underwent an impact assessment prior to implementation. (2) The Judicial Department shall make each inventory conducted pursuant to subdivision (1) of this subsection publicly available on the department's Internet web site.
Enacted 2023-07-01
PS-01.4
Section 3(c)(1)-(3)
Plain Language
The Judicial Department must independently develop, publish, and maintain its own AI governance policies and procedures — parallel to but separate from the OPM-developed policies for executive agencies. The minimum requirements are identical: procurement governance, non-discrimination protections, pre-implementation impact assessments, and ongoing assessments. The Chief Court Administrator has discretionary revision authority. Policies must be publicly posted on the department's website. This separation ensures the judiciary controls its own AI governance without executive branch oversight.
(c) (1) Not later than February 1, 2024, the Judicial Department shall develop and establish policies and procedures concerning the department's development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence. Such policies and procedures shall, at a minimum, include policies and procedures that: (A) Govern the department's procurement, implementation and ongoing assessment of such systems; (B) Are sufficient to ensure that no such system (i) results in any unlawful discrimination against any individual or group of individuals, or (ii) has any unlawful disparate impact on any individual or group of individuals on the basis of any actual or perceived differentiating characteristic, including, but not limited to, age, genetic information, color, ethnicity, race, creed, religion, national origin, ancestry, sex, gender identity or expression, sexual orientation, marital status, familial status, pregnancy, veteran status, disability or lawful source of income; (C) Require the department to assess the likely impact of any such system before implementing such system; and (D) Provide for ongoing assessments of such systems to ensure that no such system results in any unlawful discrimination or disparate impact described in subparagraph (B) of this subdivision. (2) The Judicial Department may revise the policies and procedures established pursuant to subdivision (1) of this subsection if the Chief Court Administrator determines, in said administrator's discretion, that such revision is necessary. (3) The Judicial Department shall post the policies and procedures established pursuant to subdivision (1) of this subsection, and any revision made to such policies and procedures pursuant to subdivision (2) of this subsection, on the department's Internet web site.
Enacted 2023-07-01
PS-01.2
Section 3(d)
Plain Language
Beginning February 1, 2024, the Judicial Department faces two obligations: (1) it may not deploy any new AI system without first completing a pre-implementation impact assessment under its own policies, and the Chief Court Administrator retains a discretionary veto if they determine the system would cause discrimination; and (2) it must perform ongoing assessments of all deployed AI systems to ensure they do not result in unlawful discrimination or disparate impact. This combines the pre-deployment gate and ongoing assessment obligations that are separated across Sections 2(c) and 1(c) for executive agencies into a single provision for the judiciary.
(d) Beginning on February 1, 2024, the Judicial Department shall: (1) Not implement any system that employs artificial intelligence (A) unless the department has performed an impact assessment, in accordance with the policies and procedures established pursuant to subsection (c) of this section, to ensure that such system will not result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (c) of this section, or (B) if the Chief Court Administrator determines, in said administrator's discretion, that such system will result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (c) of this section; and (2) Perform ongoing assessments of the department's systems that employ artificial intelligence to ensure that no such system shall result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (c) of this section.
Failed 2026-07-01
PS-01.4
Fla. Stat. § 287.138(3)(b), (7)
Plain Language
Beginning July 1, 2026, Florida governmental entities may not contract with any entity for AI technology, software, or products — including when AI is a portion or option of a broader contract — if the entity is owned by, has controlling interest from, or is organized under the laws of or headquartered in a foreign country of concern. Vendors must provide a sworn affidavit attesting they do not meet any of these criteria as a condition of bid or proposal acceptance. Existing contracts with such entities may not be extended or renewed after July 1, 2026.
(3)(b) Beginning July 1, 2026, a governmental entity may not accept a bid on, a proposal for, or a reply to, or enter into a contract with, an entity to provide artificial intelligence technology, software, or products, including as a portion or an option to the products or services provided under the contract, unless the entity provides the governmental entity with an affidavit signed by an officer or a representative of the entity under penalty of perjury attesting that the entity does not meet any of the criteria in paragraph (7)(a), paragraph (7)(b), or paragraph (7)(c). (7) A governmental entity may not knowingly enter into a contract with an entity for artificial intelligence technology, software, or products, including as a portion or an option to the products or services provided under the contract, if: (a) The entity is owned by the government of a foreign country of concern; (b) A government of a foreign country of concern has a controlling interest in the entity; or (c) The entity is organized under the laws of or has its principal place of business in a foreign country of concern.
Failed 2026-07-01
PS-01.4
Fla. Stat. § 287.138(3)(b), (7)
Plain Language
Beginning July 1, 2026, Florida governmental entities may not contract for AI technology, software, or products with entities that are owned by, controlled by, or organized under the laws of a foreign country of concern. Vendors seeking AI contracts must provide a sworn affidavit attesting they do not meet any of the prohibited criteria. Existing contracts with such entities may not be extended or renewed after July 1, 2026. This is a procurement restriction, not a performance standard — the obligation falls on both the governmental entity (not to contract) and the vendor (to attest).
(b) Beginning July 1, 2026, a governmental entity may not accept a bid on, a proposal for, or a reply to, or enter into a contract with, an entity to provide artificial intelligence technology, software, or products, including as a portion or an option to the products or services provided under the contract, unless the entity provides the governmental entity with an affidavit signed by an officer or a representative of the entity under penalty of perjury attesting that the entity does not meet any of the criteria in paragraph (7)(a), paragraph (7)(b), or paragraph (7)(c). (7) A governmental entity may not knowingly enter into a contract with an entity for artificial intelligence technology, software, or products, including as a portion or an option to the products or services provided under the contract, if: (a) The entity is owned by the government of a foreign country of concern; (b) A government of a foreign country of concern has a controlling interest in the entity; or (c) The entity is organized under the laws of or has its principal place of business in a foreign country of concern.
Passed 2025-03-13
PS-01.1
Section 3(1)(d)-(e)
Plain Language
The AI Governance Committee must maintain a centralized registry inventorying all generative AI and high-risk AI systems used by state government. It must also develop an approval process that records applications, use cases, and risk-mitigation rationales for each AI system. This functions as both an inventory and a pre-deployment approval gate for state agency AI use.
(d) Maintaining a centralized registry to include current inventory of generative artificial intelligence systems and high-risk artificial intelligence systems; and (e) Developing an approval process to include a registry of application, use case, and decision rationale aimed at mitigation of risks.
Passed 2025-03-13
PS-01.2
Section 3(5)(a)-(e)
Plain Language
Before a state agency AI system is approved, the executive director of the Commonwealth Office of Technology must consider and formally document at least five factors: non-discrimination, citizen benefit, required level of human oversight, risk assessment with mitigation strategies (covering cybersecurity, privacy, health, and safety), and data control and quality. This functions as a pre-deployment impact assessment requirement for government AI systems.
(5) At a minimum, the executive director of the Commonwealth Office of Technology shall consider and document: (a) How the artificial intelligence system will not result in unlawful discrimination against any individual or group of individuals; (b) How the use of generative artificial intelligence or other artificial intelligence capabilities will benefit the citizens of the Commonwealth and serve the objectives of the department or agency; (c) To what extent oversight and human interaction of the artificial intelligence system should be required; (d) The potential risks, including cybersecurity, data protection and privacy, and health and safety of individuals and businesses, and a mitigation strategy to any identified or potential risk; and (e) The proper control and management for all data possessed by the Commonwealth to maintain security and data quality.
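The five factors in subsection (5) function as a completeness checklist for the executive director's written documentation. A minimal sketch under that reading; the shorthand keys and the helper function are assumptions for illustration, not statutory terms:

```python
# Shorthand labels for the five mandated factors, Section 3(5)(a)-(e).
# The labels are hypothetical; the statute names each factor in prose.
REQUIRED_FACTORS = {
    "non_discrimination",  # (a) no unlawful discrimination against any individual or group
    "citizen_benefit",     # (b) benefit to citizens and the agency's objectives
    "human_oversight",     # (c) extent of required oversight and human interaction
    "risk_mitigation",     # (d) cybersecurity, privacy, health, and safety risks plus mitigations
    "data_control",        # (e) control, security, and quality of Commonwealth data
}

def documentation_complete(assessment: dict) -> bool:
    """True only when every mandated factor has a non-empty written entry."""
    return all(assessment.get(factor, "").strip() for factor in REQUIRED_FACTORS)

draft = {
    "non_discrimination": "Disparate-impact testing summarized in Appendix A.",
    "citizen_benefit": "Reduces average claim-processing time.",
}
print(documentation_complete(draft))  # three factors are still undocumented
```

The statute requires the factors to be considered *and* documented, so an approval workflow would gate on a check like this before the executive director signs off.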
Passed 2025-03-13
PS-01.4
Section 3(4)
Plain Language
All state departments, agencies, and administrative bodies are subject to mandatory review of their generative AI and high-risk AI systems by the Commonwealth Office of Technology. This creates a centralized audit and oversight authority over all state agency AI deployments, functioning as an internal procurement and compliance review requirement.
(4) To maintain and secure the technology infrastructure, information technology, information resources, and personal information, all departments, agencies, and administrative bodies shall be subject to review of generative artificial intelligence systems or high-risk artificial intelligence systems.
Pending 2025-01-14
Ch. 30 § 66
Plain Language
Massachusetts state agencies, and any entity acting on their behalf, are categorically prohibited from using automated decision systems for any function that relates to public assistance benefits, materially impacts an individual's rights, civil liberties, safety, or welfare, or affects a statutorily or constitutionally provided right — unless the specific use is affirmatively authorized by law. This is a default-prohibition regime: government ADS use in consequential contexts requires specific legislative authorization, reversing the typical regulatory approach under which use is permitted unless prohibited.
Any agency or department of the commonwealth, or any entity acting on behalf of an agency or department, shall be prohibited from, directly or indirectly, utilizing or applying any automated decision system in performing any function that: (i) is related to the delivery of any public assistance benefit; (ii) will have a material impact on the rights, civil liberties, safety, or welfare of any individual within the commonwealth; or (iii) affects any statutorily or constitutionally provided right of an individual; unless such utilization or application is specifically authorized in law.
Pending 2025-01-14
PS-01.4
Ch. 30B § 24(a)
Plain Language
Massachusetts state entities are prohibited from procuring, purchasing, or acquiring any service or system relying on automated decision systems unless the use is specifically authorized by law. This extends the default-prohibition regime from § 66 into the procurement process — agencies cannot even acquire ADS technology without specific legislative authorization, creating a pre-procurement gate.
a) No executive office, department, division, agency, or commission of the commonwealth shall authorize any procurement, purchase, or acquisition of any service or system utilizing, or relying on, automated decision systems, except where the use of such system is specifically authorized in law. An automated decision system is any computational process, automated system, or algorithm utilizing machine learning, statistical modeling, data analytics, artificial intelligence, or similar methods that issues an output, including a score, classification, ranking, or recommendation, that is used to assist or replace human decision making on decisions that impact natural persons.
Pending 2025-01-14
PS-01.2
Ch. 30B § 24(b)
Plain Language
State agencies may not use any ADS without first conducting a comprehensive impact assessment, with biennial reassessments and reassessment before any material change. The assessment must cover: system objectives and whether it achieves them; technical description of algorithms and training data; testing for accuracy, fairness, bias, and discrimination across a broad list of protected classes with mitigations for identified disparities; cybersecurity and privacy risks with safeguards; public health/safety risks; reasonably foreseeable misuse with safeguards; sensitive data requirements and user data controls; and notification mechanisms for affected individuals. This is a comprehensive government-sector impact assessment requirement.
b) No state agency shall utilize or apply any automated decision system unless the agency, or an entity acting on behalf of such state agency, shall have conducted an impact assessment for the application and use of such automated decision system. Following the first impact assessment, an impact assessment shall be conducted at least once every two years. An impact assessment shall be conducted prior to any material change to the automated decision-making system that may change the outcome or effect of such system. Such impact assessments shall include: i) a description of the objectives of the automated decision system; ii) an evaluation of the ability of the automated decision system to achieve its stated objectives; iii) a description and evaluation of the objectives and development of the automated decision system including: 1) A summary of the underlying algorithms, computational modes, and artificial intelligence tools that are used within the automated decision system; and 2) The design and training data used to develop the automated decision-making process. 
iv) testing for: 1) Accuracy, fairness, bias, and discrimination, and an assessment of whether the use of the automated decision-making system produces discriminatory results on the basis of a consumer's or a class of consumers' actual or perceived race, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, source of income, or disability and outlines mitigations for any identified performance differences in outcomes across relevant groups impacted by such use; 2) Any cybersecurity vulnerabilities and privacy risks resulting from the deployment and use of the automated decision-making system, and the development or existence of safeguards to mitigate the risks; 3) Any public health or safety risks resulting from the deployment and use of the automated decision-making system; 4) Any reasonably foreseeable misuse of the automated decision-making system and the development or existence of safeguards against such misuse; v) the extent to which the deployment and use of the automated decision-making system requires the input of sensitive and personal data, how that data is used and stored, and any control users may have over their data; and vi) the notification mechanism or procedure, if any, by which individuals impacted by the utilization of the automated decision-making system may be notified of the use of such automated decision-making system and of the individual's personal data, and informed of their rights and options relating to such use.
Pending 2025-01-14
PS-01.2, PS-01.3
Ch. 30B § 24(c)-(d)
Plain Language
If an impact assessment finds the ADS produces discriminatory or biased outcomes, the agency must immediately cease all use of the system and all information it produced — a mandatory stop-use provision with no remediation path (unlike the private-sector provision which allows remediation). Impact assessments must be submitted to the governor, senate president, and house speaker at least 60 days before implementation. Approved assessments must be published on the relevant agency's website. Agencies may redact information that would harm public safety, privacy, or IT security, but must publish an explanatory statement about the redaction determination alongside the redacted assessment.
c) Notwithstanding the provisions of this section or any other law, if an impact assessment finds that the automated decision-making system produces discriminatory or biased outcomes, the state agency shall cease any utilization, application, or function of such automated decision-making system, and of any information produced using that system. d) Any impact assessment conducted pursuant to this section shall be submitted to the governor, the president of the senate, and the speaker of the house at least 60 days prior to the implementation of the automated decision-making system that is the subject of such assessment. The impact statement of an automated decision-making system that is approved and utilized, shall be published on the website of the relevant agency. If the state agency makes a determination that the disclosure of any information required in the impact assessment would result in a substantial negative impact on health or safety of the public, infringe upon the privacy rights of individuals, or significantly impact the state agency's ability to protect its information technology, it may redact such information, provided that an explanatory statement on the process by which the state agency made such determination is published along with the redacted impact assessment.
Pending 2025-01-01
PS-01.4
State Technology Law § 402(2)
Plain Language
State agencies may not procure, purchase, or acquire any AI-powered service or system for use in public assistance, rights-affecting, or welfare-impacting functions unless the system supports continued and operational meaningful human review. This creates a procurement gate: vendors selling automated decision-making systems to New York state agencies must ensure their systems are architecturally capable of supporting ongoing human oversight, and agencies must verify this capability before acquisition.
No state agency shall authorize any procurement, purchase or acquisition of any service or system utilizing, or relying on, automated decision-making systems in performing any function that is: (a) related to the delivery of any public assistance benefit; (b) will have a material impact on the rights, civil liberties, safety or welfare of any individual within the state; or (c) affects any statutorily or constitutionally provided right of an individual unless such automated decision-making system is subject to continued and operational meaningful human review.
Pending 2025-01-01
PS-01.2
State Technology Law § 403(1)(a)-(f)
Plain Language
Before deploying any automated decision-making system, state agencies must conduct a comprehensive impact assessment signed by the individual(s) responsible for meaningful human review. The assessment must cover system objectives, effectiveness evaluation, technical description (algorithms, training data), bias and discrimination testing across an extensive list of protected characteristics, cybersecurity and privacy risks, public health and safety risks, foreseeable misuse, data handling practices, and notification mechanisms for affected individuals. After the initial assessment, agencies must conduct reassessments at least every two years and before any material change that could alter the system's outcomes. This is among the most detailed government AI impact assessment requirements in U.S. state legislation.
State agencies seeking to utilize or apply an automated decision-making system permitted under section four hundred two of this article with continued and operational meaningful human review shall conduct or have conducted an impact assessment substantially completed and bearing the signature of one or more individuals responsible for meaningful human review for the lawful application and use of such automated decision-making system. Following the first impact assessment, an impact assessment shall be conducted in accordance with this section at least once every two years. An impact assessment shall be conducted prior to any material change to the automated decision-making system that may change the outcome or effect of such system. Such impact assessments shall include: (a) a description of the objectives of the automated decision-making system; (b) an evaluation of the ability of the automated decision-making system to achieve its stated objectives; (c) a description and evaluation of the objectives and development of the automated decision-making including: (i) a summary of the underlying algorithms, computational modes, and artificial intelligence tools that are used within the automated decision-making system; and (ii) the design and training data used to develop the automated decision-making system process; (d) testing for: (i) accuracy, fairness, bias and discrimination, and an assessment of whether the use of the automated decision-making system produces discriminatory results on the basis of a consumer's or a class of consumers' actual or perceived race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability and outlines mitigations for any identified performance differences in outcomes across relevant groups impacted by such use; (ii) any cybersecurity vulnerabilities and privacy risks resulting from the deployment and use of the automated decision-making system, and the development or existence of safeguards to mitigate the risks; (iii) any public health or safety risks resulting from the deployment and use of the automated decision-making system; (iv) any reasonably foreseeable misuse of the automated decision-making system and the development or existence of safeguards against such misuse; (e) the extent to which the deployment and use of the automated decision-making system requires input of sensitive and personal data, how that data is used and stored, and any control users may have over their data; and (f) the notification mechanism or procedure, if any, by which individuals impacted by the utilization of the automated decision-making system may be notified of the use of such automated decision-making system and of the individual's personal data, and informed of their rights and options relating to such use.
Pending 2025-01-01
PS-01.3
State Technology Law § 404(2)(a)-(c)
Plain Language
Agencies must publish each impact assessment on their website, making it publicly accessible. Two narrow redaction exceptions exist: (1) information whose disclosure would substantially harm public health or safety, infringe individual privacy, or significantly impair IT or operational security; and (2) information about systems used for security, fraud detection, identity theft prevention, or law enforcement functions. In both cases, the agency must publish the redacted assessment along with an explanatory statement describing the process by which it determined redaction was warranted — the redaction authority is not a blanket exemption from publication.
(a) The impact assessment of an automated decision-making system shall be published on the website of the relevant state agency. (b) If the state agency makes a determination that the disclosure of any information required in the impact assessment would result in a substantial negative impact on health or safety of the public, infringe upon the privacy rights of individuals, or significantly impair the state agency's ability to protect its information technology or operational assets, such state agency may redact such information, provided that an explanatory statement on the process by which the state agency made such determination is published along with the redacted impact assessment. (c) If the impact assessment covers any automated decision-making system that includes technology that is used to prevent, detect, protect against or respond to security incidents, identity theft, fraud, harassment, malicious or deceptive activities or other illegal activity, preserve the integrity or security of systems, or to investigate, report or prosecute those responsible for any such malicious or deceptive action, such state agency may redact such information for the purposes of this subdivision, provided that an explanatory statement on the process by which the state agency made such determination is published along with the redacted impact assessment.
Pending 2025-01-01
PS-01.1
§ 3(a)-(f)
Plain Language
Within one year of the act's effective date, every state agency currently using an automated decision-making system must submit a disclosure to the legislature cataloguing that system. The disclosure must include a system description, vendor list, start date, purpose summary (including what human decision-making it supports or replaces), whether impact assessments were conducted and their results, and any other relevant information. This is a retroactive inventory requirement for existing systems — it applies to systems already deployed, not just future ones, and serves as a baseline for legislative oversight. This provision takes effect immediately upon enactment, while the substantive Article IV requirements take effect one year later.
Any state agency, that directly or indirectly, utilizes an automated decision-making system, as defined in section 401 of the state technology law, shall submit to the legislature a disclosure on the use of such system, no later than one year after the effective date of this section. Such disclosure shall include: (a) a description of the automated decision-making system utilized by such agency; (b) a list of any software vendors related to such automated decision-making system; (c) the date that the use of such system began; (d) a summary of the purpose and use of such system, including a description of human decision-making and discretion supported or replaced by the automated decision-making system; (e) whether any impact assessments for the automated decision-making system were conducted and the dates and summaries of the results of such assessments where applicable; and (f) any other information deemed relevant by the agency.
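As an illustration only, the six disclosure elements required by § 3(a)-(f) amount to a fixed record per deployed system. The sketch below models one such record; the class name, field names, and sample values are hypothetical and do not come from the bill.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ADSDisclosure:
    """Hypothetical record mirroring the six elements of § 3(a)-(f)."""
    description: str                            # (a) description of the automated decision-making system
    vendors: list[str]                          # (b) related software vendors
    use_start_date: date                        # (c) date the use of the system began
    purpose_summary: str                        # (d) purpose, incl. human decision-making supported or replaced
    impact_assessments: list[tuple[date, str]]  # (e) assessment dates and result summaries, if conducted
    other_information: Optional[str] = None     # (f) any other information the agency deems relevant

# Illustrative entry for a single existing system
disclosure = ADSDisclosure(
    description="Benefits eligibility screening tool",
    vendors=["ExampleVendor Inc."],
    use_start_date=date(2023, 5, 1),
    purpose_summary="Flags applications for caseworker review; does not replace final human decisions.",
    impact_assessments=[(date(2024, 2, 1), "No disparate impact detected")],
)
```

A structure like this makes the retroactive nature of the requirement concrete: every system already in use needs one completed record, with the assessment list left empty where no assessment was ever conducted.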
Passed 2025-09-01
PS-01.1
Gov't Code § 2054.068(b)(2)
Plain Language
DIR must collect from each state agency an inventory of all AI systems — including heightened scrutiny AI systems — as part of its broader IT infrastructure data collection. This is an ongoing reporting obligation from agencies to DIR, covering servers, mainframes, cloud services, AI systems, and vendor information. Agencies should be prepared to enumerate all AI systems in use when DIR requests this information.
(b) The department shall collect from each state agency information on the status and condition of the agency's information technology infrastructure, including information regarding: (1) the agency's information security program; (2) an inventory of the agency's servers, mainframes, cloud services, artificial intelligence systems, including heightened scrutiny artificial intelligence systems, and other information technology equipment; (3) identification of vendors that operate and manage the agency's information technology infrastructure; and (4) any additional related information requested by the department.
Passed 2025-09-01
PS-01.1
Gov't Code § 2054.0965(b)(6)-(7)
Plain Language
As part of the periodic information resources review required under § 2054.0965, each state agency must include an inventory of all AI systems and heightened scrutiny AI systems it has deployed, along with an evaluation of each system's purpose, risk mitigation measures, and strategic alignment. The agency must also confirm compliance with all applicable AI statutes, rules, standards, the code of ethics, and the minimum standards for heightened scrutiny systems. This goes beyond a simple inventory — it requires both a substantive evaluation of each system and an affirmative compliance certification.
(6) an inventory and identification of the artificial intelligence systems and heightened scrutiny artificial intelligence systems deployed by the agency, including an evaluation of the purpose of and risk mitigation measures for each system and an analysis of each system's support of the agency's strategic plan under this subchapter; and (7) confirmation by the agency of compliance with state statutes, rules, and standards relating to information resources and artificial intelligence systems, including the artificial intelligence system code of ethics developed under Section 2054.702, and minimum standards developed under Section 2054.703.
Passed 2025-09-01
PS-01.1
Gov't Code § 2054.0965(c)
Plain Language
Local governments must conduct a review of their deployment and use of heightened scrutiny AI systems and provide the review to DIR upon request. Unlike state agencies, local governments are not required to include this in the broader periodic information resources review — the obligation is limited to heightened scrutiny systems and triggered on request. Local governments should have a completed review available to produce when DIR asks for it.
(c) Local governments shall complete a review of the deployment and use of heightened scrutiny artificial intelligence systems and, on request, provide the review to the department in the manner the department prescribes.
Passed 2025-09-01
PS-01.4
Gov't Code § 2054.703(b)(4)(B)
Plain Language
The minimum standards must include guidelines requiring state agencies and local governments to contractually obligate their vendors to implement risk management frameworks when those vendors deploy heightened scrutiny AI systems on the government's behalf. This is a procurement-side obligation: the agency must include risk management requirements in vendor contracts. Vendors selling heightened scrutiny AI systems to government must be prepared to demonstrate compliance with risk management frameworks as a contractual term.
(4) establish guidelines for: (A) risk management frameworks, acceptable use policies, and training employees; and (B) mitigating the risk of unlawful harm by contractually requiring vendors to implement risk management frameworks when deploying heightened scrutiny artificial intelligence systems on behalf of state agencies or local governments.
Passed 2025-09-01
PS-01.2
Gov't Code § 2054.708(a)-(d)
Plain Language
State agencies and their contracted vendors must conduct an impact assessment for each heightened scrutiny AI system covering risks of unlawful harm (discriminatory consequential decisions against protected-class members), system limitations, and information governance practices. The assessment must be available to DIR on request. Critically, these assessments are confidential and exempt from public records disclosure under Texas's Public Information Act (Chapter 552) — agencies can redact or withhold without requesting an attorney general opinion. DIR must implement security protections for submitted assessments. This confidentiality carve-out is noteworthy because it differs from jurisdictions that require public disclosure of impact assessments.
Sec. 2054.708. IMPACT ASSESSMENTS. (a) A state agency that deploys or uses a heightened scrutiny artificial intelligence system or a vendor that contracts with a state agency for the deployment or use of a heightened scrutiny artificial intelligence system shall conduct a system assessment that outlines: (1) risks of unlawful harm; (2) system limitations; and (3) information governance practices. (b) The state agency or vendor shall make a copy of the assessment available to the department on request. (c) An impact assessment conducted under this section is confidential and not subject to disclosure under Chapter 552. The state agency or department may redact or withhold information as confidential under Chapter 552 without requesting a decision from the attorney general under Subchapter G, Chapter 552. (d) The department shall take actions necessary to ensure the confidentiality of information submitted under this section, including restricting access to submitted information to only authorized personnel and implementing physical, electronic, and procedural protections.
Passed 2022-07-01
PS-01.1
3 V.S.A. § 3305(b)(1)-(7)
Plain Language
The Agency of Digital Services must conduct a comprehensive inventory of every automated decision system being developed, used, or procured by Vermont State government. For each system, the inventory must document the system's name and vendor, general capabilities (including foreseeable out-of-scope capabilities and whether the system makes independent decisions affecting residents), data inputs and outputs, bias testing status, purpose, data security and sharing plans, and fiscal impacts. This is a detailed government AI registry requirement — it applies only to State government systems, not private-sector AI.
(b) Inventory. The Agency of Digital Services shall conduct a review and make an inventory of all automated decision systems that are being developed, employed, or procured by State government. The inventory shall include the following for each automated decision system: (1) the automated decision system's name and vendor; (2) a description of the automated decision system's general capabilities, including: (A) reasonably foreseeable capabilities outside the scope of the agency's proposed use; and (B) whether the automated decision system is used or may be used for independent decision-making powers and the impact of those decisions on Vermont residents; (3) the type or types of data inputs that the technology uses; how that data is generated, collected, and processed; and the type or types of data the automated decision system is reasonably likely to generate; (4) whether the automated decision system has been tested by an independent third party, has a known bias, or is untested for bias; (5) a description of the purpose and proposed use of the automated decision system, including: (A) what decision or decisions it will be used to make or support; (B) whether it is an automated final decision system or automated support decision system; and (C) its intended benefits, including any data or research relevant to the outcome of those results; (6) how automated decision system data is securely stored and processed and whether an agency intends to share access to the automated decision system or the data from that automated decision system with any other entity, and why; and (7) a description of the IT fiscal impacts of the automated decision system, including: (A) initial acquisition costs and ongoing operating costs, such as maintenance, licensing, personnel, legal compliance, use auditing, data retention, and security costs; (B) any cost savings that would be achieved through the use of the technology; and (C) any current or potential sources of funding, including any subsidies or free products being offered by vendors or governmental entities.
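Because 3 V.S.A. § 3305(b) enumerates seven mandatory elements per system, an agency could treat the inventory as a completeness checklist. The sketch below is purely illustrative; the dictionary keys are shorthand labels of this author's invention, not statutory terms.

```python
# Hypothetical checklist of the seven inventory elements in 3 V.S.A. § 3305(b).
# Keys are illustrative shorthand; values cite the corresponding subdivision.
REQUIRED_ELEMENTS = {
    "name_and_vendor":           "(b)(1) system name and vendor",
    "general_capabilities":      "(b)(2) capabilities, incl. out-of-scope uses and independent decision-making",
    "data_inputs_outputs":       "(b)(3) data inputs, processing, and likely generated data",
    "bias_testing_status":       "(b)(4) independently tested, known bias, or untested",
    "purpose_and_use":           "(b)(5) decisions supported, final vs. support system, intended benefits",
    "data_security_and_sharing": "(b)(6) storage, processing, and data-sharing plans",
    "fiscal_impacts":            "(b)(7) acquisition/operating costs, savings, funding sources",
}

def missing_elements(entry: dict) -> list[str]:
    """Return the subdivision citations for any inventory elements the entry omits."""
    return [cite for key, cite in REQUIRED_ELEMENTS.items() if not entry.get(key)]

# An incomplete draft entry: only two of seven elements are filled in.
entry = {
    "name_and_vendor": "ExampleSys / ExampleVendor",
    "purpose_and_use": "Caseload triage support; automated support decision system",
}
gaps = missing_elements(entry)
```

Running the check against the draft entry above would surface the five unanswered subdivisions, giving the agency a concrete gap list before the inventory is filed.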