PS-01
Public Sector AI
Government AI Accountability
Applies to: Developer · Government Sector · Government System
Bills — Enacted: 1 unique bill
Bills — Proposed: 6
Last Updated: 2026-03-29
Core Obligation

Government agencies that develop, procure, or deploy AI systems must maintain inventories of those systems, conduct impact assessments before deploying AI in consequential public-facing roles, meet defined procurement standards, and disclose AI use to affected individuals. Vendors selling AI to government agencies must be able to demonstrate standards compliance.

Sub-Obligations: 4
Bills That Map This Requirement: 7 bills
Each entry below gives the bill's status and effective date, its mapped sub-obligations, and the statutory section, followed by a plain-language summary and the quoted statutory text.
Pending 2026-01-01
PS-01.2
Bus. & Prof. Code § 22756.1(d)
Plain Language
State agencies deploying high-risk automated decision systems must require the developer to provide a copy of the developer's impact assessment. The state agency must keep the impact assessment confidential. This creates a procurement-adjacent obligation: state agencies cannot deploy these systems without first obtaining and retaining the developer's impact assessment, which functions as a pre-deployment documentation requirement for government use of AI.
(d) (1) A state agency shall require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment conducted pursuant to this section. (2) Notwithstanding any other law, an impact assessment provided to a state agency pursuant to this subdivision shall be kept confidential.
Pending 2026-01-01
PS-01.4
Pub. Contract Code § 10285.8(a)-(b)
Plain Language
State agencies are prohibited from awarding a contract for a high-risk automated decision system to any vendor that has violated the Unruh Civil Rights Act, the California Fair Employment and Housing Act, or this bill's automated decision system requirements. This functions as a procurement debarment provision — vendors with civil rights or AI compliance violations are ineligible for state AI contracts. The practical compliance burden falls on both the state agency (which must verify vendor compliance) and the vendor (which must maintain a clean compliance record to remain eligible). A minimal sketch of the eligibility test follows the statutory text.
(a) A state agency shall not award a contract for a high-risk automated decision system to a person who has violated any of the following: (1) The Unruh Civil Rights Act (Section 51 of the Civil Code). (2) The California Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code). (3) Chapter 24.6 (commencing with Section 22756) of Division 8 of the Business and Professions Code. (b) As used in this section, "high-risk automated decision system" has the same meaning as defined in Section 22756 of the Business and Professions Code.
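Read as contracting logic, the debarment rule reduces to an any-violation test: a single violation in any of the three categories disqualifies the vendor. The sketch below is illustrative only; the function and parameter names are hypothetical, not drawn from the bill.

```python
def vendor_eligible_for_award(violated_unruh: bool,
                              violated_feha: bool,
                              violated_ads_chapter: bool) -> bool:
    """Pub. Contract Code § 10285.8(a): a vendor that has violated the
    Unruh Civil Rights Act, the California Fair Employment and Housing
    Act, or the automated decision system chapter is ineligible for a
    high-risk ADS contract. Any single violation disqualifies."""
    return not (violated_unruh or violated_feha or violated_ads_chapter)
```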
Pending 2026-10-01
PS-01.2, PS-01.4
Sec. 14(b)-(c)
Plain Language
State agencies face a categorical prohibition on using AI technology for public benefit delivery or any function materially impacting individual rights, civil liberties, safety, or welfare unless specifically authorized by law. Procurement of AI technology is similarly restricted to specifically authorized uses. When procurement is authorized, the agency must obtain a full independent bias audit (meeting the same Section 8 standards as private-sector deployers), submit the audit to the Commissioner of Administrative Services, and post it publicly on the agency's website at least 60 days before deployment. PII may be redacted from the published audit.
(b) (1) No state agency, or any entity acting on behalf of a state agency, shall, directly or indirectly, utilize or apply any artificial intelligence technology in performing any function that (A) is related to the delivery of any public assistance benefit to individuals in the state by such agency, or (B) will have a material impact on the rights, civil liberties, safety or welfare of individuals in the state, unless such utilization or application is specifically authorized by law. (2) No state agency shall authorize any procurement, purchase or acquisition of any artificial intelligence technology, except where the use of such system is specifically authorized by law. (3) If a state agency is authorized to procure, purchase or acquire an artificial intelligence technology, the state agency shall contract with an independent auditor to complete a bias audit pursuant to subsection (a) of section 8 of this act. (c) Any bias audit completed pursuant to subdivision (3) of subsection (b) of this section shall be submitted to the Commissioner of Administrative Services, in a form and manner prescribed by the commissioner, and posted on the agency's Internet web site not later than sixty days prior to deployment of such artificial intelligence technology. Any agency may redact any data in such impact statement to remove personally identifiable information of any individual.
Enacted 2023-07-01
PS-01.1, PS-01.3
Section 1(b)(1)-(2)
Plain Language
The Department of Administrative Services must conduct and publicly publish an annual inventory of all AI systems used by any Connecticut state agency. The inventory must cover each system's name, vendor, capabilities, whether it was used to make or support decisions, and whether it underwent a pre-implementation impact assessment. The first inventory was due by December 31, 2023, and the inventory must be repeated annually. Publication must be on the state's open data portal, ensuring machine-readable public access. This is one of the earliest state-level government AI inventory mandates. A sketch of the required record fields follows the statutory text.
(b) (1) Not later than December 31, 2023, and annually thereafter, the Department of Administrative Services shall conduct an inventory of all systems that employ artificial intelligence and are in use by any state agency. Each such inventory shall include at least the following information for each such system: (A) The name of such system and the vendor, if any, that provided such system; (B) A description of the general capabilities and uses of such system; (C) Whether such system was used to independently make, inform or materially support a conclusion, decision or judgment; and (D) Whether such system underwent an impact assessment prior to implementation. (2) The Department of Administrative Services shall make each inventory conducted pursuant to subdivision (1) of this subsection publicly available on the state's open data portal.
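For illustration, the four statutory data categories map naturally onto a simple record type. This is a hypothetical sketch: the field names are invented, and only the categories (A) through (D) come from the quoted text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIInventoryRecord:
    """One annual-inventory entry under Section 1(b)(1) of the act.

    Field names are hypothetical; only the four information
    categories (A)-(D) come from the statutory text above.
    """
    system_name: str           # (A) name of the system
    vendor: Optional[str]      # (A) vendor that provided it, if any
    capabilities: str          # (B) general capabilities and uses
    informs_decisions: bool    # (C) independently made, informed, or
                               #     materially supported a decision
    assessed_before_use: bool  # (D) impact assessment before implementation
```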
Enacted 2023-07-01
PS-01.4
Section 2(b)(1)-(3)
Plain Language
The Office of Policy and Management (OPM) must develop, publish, and maintain AI governance policies and procedures covering the full lifecycle of AI systems used by state agencies — development, procurement, implementation, utilization, and ongoing assessment. The policies must at minimum address procurement standards, anti-discrimination and disparate impact protections across a broad set of protected characteristics, pre-implementation impact assessments, and ongoing assessments by the Department of Administrative Services (DAS). The policies must be publicly posted on OPM's website and may be revised at the Secretary's discretion. This effectively creates an AI governance framework for all Connecticut executive branch agencies, with OPM as the standard-setting body and DAS as the compliance assessor.
(b) (1) Not later than February 1, 2024, the Office of Policy and Management shall develop and establish policies and procedures concerning the development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence and are in use by state agencies. Such policies and procedures shall, at a minimum, include policies and procedures that: (A) Govern the procurement, implementation and ongoing assessment of such systems by state agencies; (B) Are sufficient to ensure that no such system (i) results in any unlawful discrimination against any individual or group of individuals, or (ii) has any unlawful disparate impact on any individual or group of individuals on the basis of any actual or perceived differentiating characteristic, including, but not limited to, age, genetic information, color, ethnicity, race, creed, religion, national origin, ancestry, sex, gender identity or expression, sexual orientation, marital status, familial status, pregnancy, veteran status, disability or lawful source of income; (C) Require a state agency to assess the likely impact of any such system before implementing such system; and (D) Provide for the Department of Administrative Services to perform ongoing assessments of such systems to ensure that no such system results in any unlawful discrimination or disparate impact described in subparagraph (B) of this subdivision. (2) The Office of Policy and Management may revise the policies and procedures established pursuant to subdivision (1) of this subsection if the Secretary of the Office of Policy and Management determines, in said secretary's discretion, that such revision is necessary. (3) The Office of Policy and Management shall post the policies and procedures established pursuant to subdivision (1) of this subsection, and any revision made to such policies and procedures pursuant to subdivision (2) of this subsection, on the office's Internet web site.
Enacted 2023-07-01
PS-01.2
Section 2(c)
Plain Language
Beginning February 1, 2024, no Connecticut state agency may deploy a new AI system unless it has first completed a pre-implementation impact assessment confirming the system will not result in unlawful discrimination or disparate impact. The assessment must follow OPM's policies. Separately, even if the assessment is completed, the agency head retains discretionary authority to block implementation if they determine the system would cause unlawful discrimination. This creates a dual gate: the impact assessment must be satisfied, and the agency head must not exercise the discretionary veto. It is a hard deployment prohibition — agencies cannot deploy first and assess later. The dual gate is sketched in code after the statutory text.
(c) Beginning on February 1, 2024, no state agency shall implement any system that employs artificial intelligence (1) unless the state agency has performed an impact assessment, in accordance with the policies and procedures established pursuant to subsection (b) of this section, to ensure that such system will not result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (b) of this section, or (2) if the head of such state agency determines, in such agency head's discretion, that such system will result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (b) of this section.
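The dual gate can be read as two independent boolean conditions that must both hold. A minimal sketch, with hypothetical parameter names, assuming the assessment outcome and the veto are tracked separately:

```python
def may_implement_system(assessment_done_per_opm_policy: bool,
                         assessment_found_no_discrimination: bool,
                         agency_head_vetoed: bool) -> bool:
    """Section 2(c) dual gate: both conditions must hold before deployment.

    Gate 1: a pre-implementation impact assessment, performed under the
    OPM policies, found no unlawful discrimination or disparate impact.
    Gate 2: the agency head has not exercised the discretionary veto.
    """
    gate_1 = assessment_done_per_opm_policy and assessment_found_no_discrimination
    gate_2 = not agency_head_vetoed
    return gate_1 and gate_2
```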
Enacted 2023-07-01
PS-01.1PS-01.3
Section 3(b)(1)-(2)
Plain Language
The Judicial Department must conduct and publicly publish an annual inventory of its AI systems, mirroring the executive branch inventory requirement in Section 1. The same four data fields are required: system name and vendor, capabilities, decision-making role, and whether a pre-implementation impact assessment was performed. Publication is on the Judicial Department's website rather than the state open data portal. The first inventory was due by December 31, 2023. This parallel structure reflects the constitutional separation of powers — the Judicial Department governs its own AI systems independently of the executive branch framework.
(b) (1) Not later than December 31, 2023, and annually thereafter, the Judicial Department shall conduct an inventory of the department's systems that employ artificial intelligence. Each such inventory shall include at least the following information for each such system: (A) The name of such system and the vendor, if any, that provided such system; (B) A description of the general capabilities and uses of such system; (C) Whether such system was used to independently make, inform or materially support a conclusion, decision or judgment; and (D) Whether such system underwent an impact assessment prior to implementation. (2) The Judicial Department shall make each inventory conducted pursuant to subdivision (1) of this subsection publicly available on the department's Internet web site.
Enacted 2023-07-01
PS-01.4
Section 3(c)(1)-(3)
Plain Language
The Judicial Department must independently develop, publish, and maintain its own AI governance policies and procedures — parallel to but separate from the OPM-developed policies for executive agencies. The minimum requirements are identical: procurement governance, non-discrimination protections, pre-implementation impact assessments, and ongoing assessments. The Chief Court Administrator has discretionary revision authority. Policies must be publicly posted on the department's website. This separation ensures the judiciary controls its own AI governance without executive branch oversight.
(c) (1) Not later than February 1, 2024, the Judicial Department shall develop and establish policies and procedures concerning the department's development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence. Such policies and procedures shall, at a minimum, include policies and procedures that: (A) Govern the department's procurement, implementation and ongoing assessment of such systems; (B) Are sufficient to ensure that no such system (i) results in any unlawful discrimination against any individual or group of individuals, or (ii) has any unlawful disparate impact on any individual or group of individuals on the basis of any actual or perceived differentiating characteristic, including, but not limited to, age, genetic information, color, ethnicity, race, creed, religion, national origin, ancestry, sex, gender identity or expression, sexual orientation, marital status, familial status, pregnancy, veteran status, disability or lawful source of income; (C) Require the department to assess the likely impact of any such system before implementing such system; and (D) Provide for ongoing assessments of such systems to ensure that no such system results in any unlawful discrimination or disparate impact described in subparagraph (B) of this subdivision. (2) The Judicial Department may revise the policies and procedures established pursuant to subdivision (1) of this subsection if the Chief Court Administrator determines, in said administrator's discretion, that such revision is necessary. (3) The Judicial Department shall post the policies and procedures established pursuant to subdivision (1) of this subsection, and any revision made to such policies and procedures pursuant to subdivision (2) of this subsection, on the department's Internet web site.
Enacted 2023-07-01
PS-01.2
Section 3(d)
Plain Language
Beginning February 1, 2024, the Judicial Department faces two obligations: (1) it may not deploy any new AI system without first completing a pre-implementation impact assessment under its own policies, and the Chief Court Administrator retains a discretionary veto if they determine the system would cause discrimination; and (2) it must perform ongoing assessments of all deployed AI systems to ensure they do not result in unlawful discrimination or disparate impact. This combines, in a single provision for the judiciary, the pre-deployment gate and the ongoing-assessment obligation that Sections 2(c) and 2(b)(1)(D) impose separately on executive agencies.
(d) Beginning on February 1, 2024, the Judicial Department shall: (1) Not implement any system that employs artificial intelligence (A) unless the department has performed an impact assessment, in accordance with the policies and procedures established pursuant to subsection (c) of this section, to ensure that such system will not result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (c) of this section, or (B) if the Chief Court Administrator determines, in said administrator's discretion, that such system will result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (c) of this section; and (2) Perform ongoing assessments of the department's systems that employ artificial intelligence to ensure that no such system shall result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (c) of this section.
Pending 2026-07-01
PS-01.4
Fla. Stat. § 287.138(3)(b), (7)
Plain Language
Beginning July 1, 2026, Florida governmental entities may not contract with AI technology vendors that are owned by, controlled by, or organized under the laws of a foreign country of concern. Before accepting any bid or entering a contract for AI technology, software, or products, the government entity must obtain a sworn affidavit from the vendor attesting that it has no such foreign-country-of-concern ties. This extends Florida's existing foreign-country-of-concern contracting prohibitions to the AI procurement context. Vendors selling AI to Florida government agencies must be prepared to execute the required affidavit. The combined affidavit-and-disqualifier gate is sketched after the statutory text.
(3)(b) Beginning July 1, 2026, a governmental entity may not accept a bid on, a proposal for, or a reply to, or enter into a contract with, an entity to provide artificial intelligence technology, software, or products, including as a portion or an option to the products or services provided under the contract, unless the entity provides the governmental entity with an affidavit signed by an officer or a representative of the entity under penalty of perjury attesting that the entity does not meet any of the criteria in paragraph (7)(a), paragraph (7)(b), or paragraph (7)(c). (7) A governmental entity may not knowingly enter into a contract with an entity for artificial intelligence technology, software, or products, including as a portion or an option to the products or services provided under the contract, if: (a) The entity is owned by the government of a foreign country of concern; (b) A government of a foreign country of concern has a controlling interest in the entity; or (c) The entity is organized under the laws of or has its principal place of business in a foreign country of concern.
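Read as procurement logic, the provision combines an affidavit precondition with three independent disqualifiers. A hypothetical sketch, with "focc" abbreviating "foreign country of concern":

```python
def may_accept_bid(affidavit_on_file: bool,
                   owned_by_focc: bool,
                   focc_controlling_interest: bool,
                   organized_in_focc: bool) -> bool:
    """Fla. Stat. § 287.138(3)(b) and (7) procurement gate.

    The sworn affidavit is a precondition for accepting any bid, and
    each of the (7)(a)-(c) criteria independently bars the contract.
    """
    prohibited = owned_by_focc or focc_controlling_interest or organized_in_focc
    return affidavit_on_file and not prohibited
```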
Failed 2026-07-01
PS-01.4
Fla. Stat. § 287.138(3)(b), (7)
Plain Language
Beginning July 1, 2026, Florida governmental entities may not contract for AI technology, software, or products with entities that are owned by, controlled by, or organized under the laws of a foreign country of concern. Vendors seeking AI contracts must provide a sworn affidavit attesting they do not meet any of the prohibited criteria. Existing contracts with such entities may not be extended or renewed after July 1, 2026. This is a procurement restriction, not a performance standard — the obligation falls on both the governmental entity (not to contract) and the vendor (to attest).
(b) Beginning July 1, 2026, a governmental entity may not accept a bid on, a proposal for, or a reply to, or enter into a contract with, an entity to provide artificial intelligence technology, software, or products, including as a portion or an option to the products or services provided under the contract, unless the entity provides the governmental entity with an affidavit signed by an officer or a representative of the entity under penalty of perjury attesting that the entity does not meet any of the criteria in paragraph (7)(a), paragraph (7)(b), or paragraph (7)(c). (7) A governmental entity may not knowingly enter into a contract with an entity for artificial intelligence technology, software, or products, including as a portion or an option to the products or services provided under the contract, if: (a) The entity is owned by the government of a foreign country of concern; (b) A government of a foreign country of concern has a controlling interest in the entity; or (c) The entity is organized under the laws of or has its principal place of business in a foreign country of concern.
Pre-filed 2025-01-14
PS-01.4
M.G.L. c. 30, § 66
Plain Language
State agencies and their contractors are categorically prohibited from using any automated decision system for three categories of functions — public assistance delivery, functions materially impacting individual rights/safety/welfare, or functions affecting statutory or constitutional rights — unless the specific use is authorized by law. This is an extremely restrictive default-prohibition approach: agencies must affirmatively identify statutory authorization before deploying any ADS in these contexts. The scope is broad enough to cover virtually any consequential government ADS use.
Any agency or department of the commonwealth, or any entity acting on behalf of an agency or department, shall be prohibited from, directly or indirectly, utilizing or applying any automated decision system in performing any function that: (i) is related to the delivery of any public assistance benefit; (ii) will have a material impact on the rights, civil liberties, safety, or welfare of any individual within the commonwealth; or (iii) affects any statutorily or constitutionally provided right of an individual; unless such utilization or application is specifically authorized in law.
Pre-filed 2025-01-14
PS-01.4
M.G.L. c. 30B, § 24(a)
Plain Language
State executive branch entities may not procure, purchase, or acquire any service or system utilizing automated decision systems unless the use is specifically authorized by law. This extends the prohibition on government ADS use from operational deployment (Section 66) to the procurement stage — agencies cannot even acquire ADS tools without statutory authorization, creating a procurement-stage gate in addition to the deployment-stage gate.
(a) No executive office, department, division, agency, or commission of the commonwealth shall authorize any procurement, purchase, or acquisition of any service or system utilizing, or relying on, automated decision systems, except where the use of such system is specifically authorized in law.
Pre-filed 2025-01-14
PS-01.2
M.G.L. c. 30B, § 24(b)-(d)
Plain Language
State agencies that have statutory authorization to use ADS must still conduct comprehensive impact assessments before deployment, at least every two years thereafter, and before any material system change. Assessments must cover six areas: the system's objectives; its ability to achieve those objectives; its algorithms and training data; testing for accuracy, fairness, bias, discrimination, cybersecurity, safety, and misuse; personal data use; and individual notification mechanisms. If an assessment finds discriminatory or biased outcomes, the agency must immediately cease all use of the system and of any information it produced. Assessments must be submitted to the Governor, Senate President, and House Speaker at least 60 days before implementation and published on the agency's website (with limited redaction authority for public safety, privacy, or IT security concerns, accompanied by an explanatory statement). The reassessment cadence and the cessation rule are sketched in code after the statutory text.
(b) No state agency shall utilize or apply any automated decision system unless the agency, or an entity acting on behalf of such state agency, shall have conducted an impact assessment for the application and use of such automated decision system. Following the first impact assessment, an impact assessment shall be conducted at least once every two years. An impact assessment shall be conducted prior to any material change to the automated decision-making system that may change the outcome or effect of such system. Such impact assessments shall include: (i) a description of the objectives of the automated decision system; (ii) an evaluation of the ability of the automated decision system to achieve its stated objectives; (iii) a description and evaluation of the objectives and development of the automated decision system including: (1) A summary of the underlying algorithms, computational modes, and artificial intelligence tools that are used within the automated decision system; and (2) The design and training data used to develop the automated decision-making process. (iv) testing for: (1) Accuracy, fairness, bias, and discrimination, and an assessment of whether the use of the automated decision-making system produces discriminatory results on the basis of a consumer's or a class of consumers' actual or perceived race, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, source of income, or disability and outlines mitigations for any identified performance differences in outcomes across relevant groups impacted by such use; (2) Any cybersecurity vulnerabilities and privacy risks resulting from the deployment and use of the automated decision-making system, and the development or existence of safeguards to mitigate the risks; (3) Any public health or safety risks resulting from the deployment and use of the automated decision-making system; (4) Any reasonably foreseeable misuse of the automated decision-making system and the development or existence of safeguards against such misuse; (v) the extent to which the deployment and use of the automated decision-making system requires the input of sensitive and personal data, how that data is used and stored, and any control users may have over their data; and (vi) the notification mechanism or procedure, if any, by which individuals impacted by the utilization of the automated decision-making system may be notified of the use of such automated decision-making system and of the individual's personal data, and informed of their rights and options relating to such use. (c) Notwithstanding the provisions of this section or any other law, if an impact assessment finds that the automated decision-making system produces discriminatory or biased outcomes, the state agency shall cease any utilization, application, or function of such automated decision-making system, and of any information produced using that system. (d) Any impact assessment conducted pursuant to this section shall be submitted to the governor, the president of the senate, and the speaker of the house at least 60 days prior to the implementation of the automated decision-making system that is the subject of such assessment. The impact statement of an automated decision-making system that is approved and utilized, shall be published on the website of the relevant agency.
If the state agency makes a determination that the disclosure of any information required in the impact assessment would result in a substantial negative impact on health or safety of the public, infringe upon the privacy rights of individuals, or significantly impact the state agency's ability to protect its information technology, it may redact such information, provided that an explanatory statement on the process by which the state agency made such determination is published along with the redacted impact assessment.
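The reassessment cadence and the hard-stop rule can be sketched as simple checks. This is illustrative only; the names are hypothetical, and the 730-day interval is an assumption standing in for the bill's "at least once every two years":

```python
from datetime import date, timedelta

# Approximation of "at least once every two years"; the bill does not
# define the interval in days.
REASSESSMENT_INTERVAL = timedelta(days=730)

def reassessment_due(last_assessment: date, today: date,
                     material_change_pending: bool) -> bool:
    """Section 24(b): reassess at least every two years, and before any
    material change that may alter the system's outcome or effect."""
    return material_change_pending or (today - last_assessment) >= REASSESSMENT_INTERVAL

def must_cease_use(assessment_found_biased_outcomes: bool) -> bool:
    """Section 24(c): a finding of discriminatory or biased outcomes
    requires ceasing all use of the system and of any information it
    produced; the quoted text provides no cure period."""
    return assessment_found_biased_outcomes
```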
Pending 2026-07-01
Va. Code § 19.2-11.14(C)
Plain Language
Every time a law-enforcement officer uses covered artificial intelligence during a criminal investigation, the use must be documented in the official police report. The disclosure must identify the AI tool by name and description, and explain how it was used — specifically, whether it generated an investigative lead or aided in identifying a suspect, witness, or victim. Upon arrest or issuance of a summons, the report must be provided to both the prosecutor (attorney for the Commonwealth) and defense counsel (or the individual directly if unrepresented). If covered AI is used after an arrest, disclosure to the prosecutor and the individual must occur within 30 calendar days. Administrative AI tools (e.g., spell-check, document management) are excluded from the definition of covered AI. The required disclosure contents and the post-arrest deadline are sketched after the statutory text.
C. Any use of covered artificial intelligence in a criminal investigation by a law-enforcement officer shall be disclosed in the official police report filed for such investigation. Upon arrest or issuance of a summons following a criminal investigation, the official police report shall be submitted to the attorney for the Commonwealth and provided to counsel for the individual under investigation or directly to the individual under investigation if not represented by counsel. Any use of covered artificial intelligence by the law-enforcement agency in a criminal investigation subsequent to arrest shall be disclosed to the attorney for the Commonwealth and the individual under investigation as soon as practicable but no later than 30 calendar days following such use.

Disclosure of the use of covered artificial intelligence in the official police report shall include:

1. The name and a description of the covered artificial intelligence; and

2. A brief description of the covered artificial intelligence's role in the investigation, including whether it was used to generate an investigative lead or identify or aid in the identification of a suspect, witness, or victim.
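As an illustration, the report's required disclosure contents and the post-arrest deadline might be modeled as below. The names are hypothetical; only the two enumerated elements and the 30-calendar-day window come from the statute.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CoveredAIDisclosure:
    """The two elements the official police report must contain; field
    names are hypothetical, the elements come from § 19.2-11.14(C)."""
    tool_name_and_description: str  # element 1: name and description
    role_in_investigation: str      # element 2: e.g., generated a lead, or
                                    # aided in identifying a suspect,
                                    # witness, or victim

def post_arrest_disclosure_deadline(use_date: date) -> date:
    """Post-arrest uses must be disclosed as soon as practicable and no
    later than 30 calendar days after the use."""
    return use_date + timedelta(days=30)
```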