SB-1964
TX · State · USA
TX
USA
● Passed
Proposed Effective Date
2025-09-01
Texas S.B. No. 1964 — An Act relating to the regulation and use of artificial intelligence systems and the management of data by governmental entities
Summary

Texas SB 1964 regulates the procurement, development, deployment, and use of AI systems by state agencies and local governments. It creates two tiers of AI systems: general AI systems and "heightened scrutiny" AI systems (those intended to autonomously make or be a controlling factor in consequential decisions affecting access to government services). The law requires DIR to establish an AI code of ethics aligned with the NIST AI RMF, develop minimum risk management standards for heightened scrutiny systems, and create an AI sandbox program. State agencies must disclose public-facing AI use, conduct impact assessments for heightened scrutiny systems, maintain AI inventories, and post standardized notices. Enforcement is through the attorney general via injunction, with a complaint web page for the public and a 30-day vendor cure period before contract voiding.

Enforcement & Penalties
Enforcement Authority
The Texas Department of Information Resources (DIR) has primary administrative and regulatory authority, including rulemaking, standards development, and receiving impact assessments. The attorney general has enforcement authority; enforcement is complaint-driven, with complaints submitted through the web page established under § 2054.710 and reports of violations from state agencies or vendors submitted under § 2054.709. The attorney general reviews complaints and violation reports and may bring an action to enjoin violations. For vendors, a 30-day written notice and cure period applies before the state agency may issue a notice of intent to void the contract, followed by an additional 30-day cure period. Vendors with more than one voided contract may be referred to the comptroller for debarment from state contracts. No private right of action is created.
Penalties
The statute does not specify monetary damages, civil penalties, or statutory minimums. The attorney general may seek injunctive relief to enjoin violations. For vendors, the remedy is contract voiding by the state agency and potential debarment from future state contracts by the comptroller.
Who Is Covered
What Is Covered
"Artificial intelligence system" means a machine-based system that for explicit or implicit objectives infers from provided information a method to generate outputs, such as predictions, content, recommendations, or decisions, to influence a physical or virtual environment with varying levels of autonomy and adaptiveness after deployment.
"Heightened scrutiny artificial intelligence system" means an artificial intelligence system specifically intended to autonomously make, or be a controlling factor in making, a consequential decision. The term does not include an artificial intelligence system intended to: (A) perform a narrow procedural task; (B) improve the result of a previously completed human activity; (C) perform a preparatory task to an assessment relevant to a consequential decision; or (D) detect decision-making patterns or deviations from previous decision-making patterns.
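The two-tier test above can be expressed as a simple checklist. The sketch below is illustrative only — the field names are hypothetical labels for the statutory elements, not terms defined by the bill, and a real classification requires legal judgment about intent and the exclusions:

```python
from dataclasses import dataclass

# Hypothetical profile of a deployed system; field names are illustrative
# shorthand for the statutory definition's elements, not statutory terms.
@dataclass
class AISystemProfile:
    makes_or_controls_consequential_decision: bool
    # Exclusions under subparagraphs (A)-(D) of the definition:
    narrow_procedural_task: bool = False
    improves_completed_human_activity: bool = False
    preparatory_task_only: bool = False
    pattern_detection_only: bool = False

def is_heightened_scrutiny(p: AISystemProfile) -> bool:
    """Sketch of the 'heightened scrutiny' test: the system must be intended
    to autonomously make, or be a controlling factor in, a consequential
    decision, and must not fall within exclusions (A)-(D)."""
    if not p.makes_or_controls_consequential_decision:
        return False
    excluded = (p.narrow_procedural_task
                or p.improves_completed_human_activity
                or p.preparatory_task_only
                or p.pattern_detection_only)
    return not excluded
```

For example, an eligibility-determination engine with no human in the loop would satisfy the test, while a document-formatting tool that only performs a narrow procedural task would not.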
Compliance Obligations (16 obligations)
PS-01 Government AI Accountability · PS-01.1 · Government · Government System
Gov't Code § 2054.068(b)(2)
Plain Language
DIR must collect from each state agency an inventory of all AI systems — including heightened scrutiny AI systems — as part of its broader IT infrastructure data collection. This is an ongoing reporting obligation from agencies to DIR, covering servers, mainframes, cloud services, AI systems, and vendor information. Agencies should be prepared to enumerate all AI systems in use when DIR requests this information.
Statutory Text
(b) The department shall collect from each state agency information on the status and condition of the agency's information technology infrastructure, including information regarding: (1) the agency's information security program; (2) an inventory of the agency's servers, mainframes, cloud services, artificial intelligence systems, including heightened scrutiny artificial intelligence systems, and other information technology equipment; (3) identification of vendors that operate and manage the agency's information technology infrastructure; and (4) any additional related information requested by the department.
PS-01 Government AI Accountability · PS-01.1 · Government · Government System
Gov't Code § 2054.0965(b)(6)-(7)
Plain Language
As part of the periodic information resources review required under § 2054.0965, each state agency must include an inventory of all AI systems and heightened scrutiny AI systems it has deployed, along with an evaluation of each system's purpose, risk mitigation measures, and strategic alignment. The agency must also confirm compliance with all applicable AI statutes, rules, standards, the code of ethics, and the minimum standards for heightened scrutiny systems. This goes beyond a simple inventory — it requires both a substantive evaluation of each system and an affirmative compliance certification.
Statutory Text
(6) an inventory and identification of the artificial intelligence systems and heightened scrutiny artificial intelligence systems deployed by the agency, including an evaluation of the purpose of and risk mitigation measures for each system and an analysis of each system's support of the agency's strategic plan under this subchapter; and (7) confirmation by the agency of compliance with state statutes, rules, and standards relating to information resources and artificial intelligence systems, including the artificial intelligence system code of ethics developed under Section 2054.702, and minimum standards developed under Section 2054.703.
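Agencies preparing for this review need a record per system that captures both the evaluation fields in subdivision (6) and the compliance confirmation in subdivision (7). The schema below is a hypothetical sketch — DIR prescribes no particular format, and every field name here is an assumption drawn from the statutory language:

```python
from dataclasses import dataclass

# Hypothetical inventory record; fields track § 2054.0965(b)(6)-(7) elements.
@dataclass
class AIInventoryEntry:
    system_name: str
    heightened_scrutiny: bool
    purpose: str                        # evaluation of the system's purpose
    risk_mitigation_measures: list[str] # documented mitigations per system
    strategic_plan_alignment: str       # analysis of strategic-plan support
    compliance_confirmed: bool          # code of ethics (§ 2054.702) and
                                        # minimum standards (§ 2054.703)

def review_submission(entries: list[AIInventoryEntry]) -> dict:
    """Roll the inventory up into the review's two components: the evaluated
    inventory itself and an affirmative agency-wide compliance confirmation
    (which holds only if every system is individually confirmed)."""
    return {
        "inventory": entries,
        "compliance_confirmed": all(e.compliance_confirmed for e in entries),
    }
```

The design point the statute forces is the last field: the review is not just a list of systems but an affirmative certification, so one non-compliant system blocks the agency-wide confirmation.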
PS-01 Government AI Accountability · PS-01.1 · Government · Government System
Gov't Code § 2054.0965(c)
Plain Language
Local governments must conduct a review of their deployment and use of heightened scrutiny AI systems and provide the review to DIR upon request. Unlike state agencies, local governments are not required to include this in the broader periodic information resources review — the obligation is limited to heightened scrutiny systems and triggered on request. Local governments should have a completed review available to produce when DIR asks for it.
Statutory Text
(c) Local governments shall complete a review of the deployment and use of heightened scrutiny artificial intelligence systems and, on request, provide the review to the department in the manner the department prescribes.
G-01 AI Governance Program & Documentation · G-01.1 · Government · Government System
Gov't Code § 2054.702(a)-(c)
Plain Language
DIR must establish by rule an AI code of ethics aligned with NIST AI RMF 1.0, covering human oversight, fairness, transparency, data privacy, redress/accountability, and evaluation frequency. All state agencies and local governments that procure, develop, deploy, or use AI systems must adopt this code. This is a mandatory adoption requirement — agencies cannot opt out or develop their own competing framework. Aligning with NIST AI RMF 1.0 anchors the code's substance in a widely recognized risk management framework.
Statutory Text
Sec. 2054.702. ARTIFICIAL INTELLIGENCE SYSTEM CODE OF ETHICS. (a) The department by rule shall establish an artificial intelligence system code of ethics for use by state agencies and local governments that procure, develop, deploy, or use artificial intelligence systems. (b) At a minimum, the artificial intelligence system code of ethics must include guidance for the deployment and use of artificial intelligence systems and heightened scrutiny artificial intelligence systems that aligns with the Artificial Intelligence Risk Management Framework (AI RMF 1.0) published by the National Institute of Standards and Technology. The guidance must address: (1) human oversight and control; (2) fairness and accuracy; (3) transparency, including consumer disclosures; (4) data privacy and security; (5) public and internal redress, including accountability and liability; and (6) the frequency of evaluations and documentation of improvements. (c) State agencies and local governments shall adopt the code of ethics developed under this section.
G-01 AI Governance Program & Documentation · G-01.1 · Government · Government System
Gov't Code § 2054.703(a)-(c)
Plain Language
DIR must develop minimum risk management and governance standards — consistent with NIST AI RMF 1.0 — specifically for heightened scrutiny AI systems used by state agencies and local governments. These standards must require accountability reports, pre-deployment assessments covering security risks, performance metrics, and transparency, and re-assessments upon material changes to the system, data, or intended use. Standards must also address vendor risk management through contractual requirements, employee training, and acceptable use policies. All state agencies and local governments must adopt these standards. Pre-deployment testing is carved out from the definition of unlawful harm, creating a safe harbor for good-faith compliance testing.
Statutory Text
Sec. 2054.703. MINIMUM STANDARDS FOR HEIGHTENED SCRUTINY ARTIFICIAL INTELLIGENCE SYSTEMS. (a) The department by rule shall develop minimum risk management and governance standards for the development, procurement, deployment, and use of heightened scrutiny artificial intelligence systems by a state agency or local government. (b) The minimum standards must be consistent with the Artificial Intelligence Risk Management Framework (AI RMF 1.0) published by the National Institute of Standards and Technology and must: (1) establish accountability measures, such as required reports describing the use of, limitations of, and safeguards for the heightened scrutiny artificial intelligence system; (2) require the assessment and documentation of the heightened scrutiny artificial intelligence system's known security risks, performance metrics, and transparency measures: (A) before deploying the system; and (B) at the time any material change is made to: (i) the system; (ii) the state or local data used by the system; or (iii) the intended use of the system; (3) provide to local governments resources that advise on managing, procuring, and deploying a heightened scrutiny artificial intelligence system, including data protection measures and employee training; and (4) establish guidelines for: (A) risk management frameworks, acceptable use policies, and training employees; and (B) mitigating the risk of unlawful harm by contractually requiring vendors to implement risk management frameworks when deploying heightened scrutiny artificial intelligence systems on behalf of state agencies or local governments. (c) State agencies and local governments shall adopt the standards developed under Subsection (a).
PS-01 Government AI Accountability · PS-01.4 · Government · Government System
Gov't Code § 2054.703(b)(4)(B)
Plain Language
The minimum standards must include guidelines requiring state agencies and local governments to contractually obligate their vendors to implement risk management frameworks when those vendors deploy heightened scrutiny AI systems on government's behalf. This is a procurement-side obligation — the agency must include risk management requirements in vendor contracts. Vendors selling heightened scrutiny AI systems to government must be prepared to demonstrate compliance with risk management frameworks as a contractual term.
Statutory Text
(4) establish guidelines for: (A) risk management frameworks, acceptable use policies, and training employees; and (B) mitigating the risk of unlawful harm by contractually requiring vendors to implement risk management frameworks when deploying heightened scrutiny artificial intelligence systems on behalf of state agencies or local governments.
T-01 AI Identity Disclosure · T-01.1 · Government · Government System
Gov't Code § 2054.707
Plain Language
State agencies using public-facing AI systems must clearly disclose to the public that they are interacting with an AI system, following the format prescribed by the AI code of ethics. This is a conditional obligation — no disclosure is required if a reasonable person would already know they are interacting with AI. The specific form and manner of disclosure will depend on the code of ethics developed by DIR under § 2054.702. This is analogous to AI identity disclosure laws in other jurisdictions but applies only to government-deployed systems.
Statutory Text
Sec. 2054.707. DISCLOSURE REQUIREMENTS. A state agency that procures, develops, deploys, or uses a public-facing artificial intelligence system shall provide clear disclosure of interaction with the system to the public as provided by the artificial intelligence system code of ethics established under Section 2054.702. The disclosure is not required if a reasonable person would know the person is interacting with an artificial intelligence system.
PS-01 Government AI Accountability · PS-01.2 · Government · Government System
Gov't Code § 2054.708(a)-(d)
Plain Language
State agencies and their contracted vendors must conduct an impact assessment for each heightened scrutiny AI system covering risks of unlawful harm (discriminatory consequential decisions against protected-class members), system limitations, and information governance practices. The assessment must be available to DIR on request. Critically, these assessments are confidential and exempt from public records disclosure under Texas's Public Information Act (Chapter 552) — agencies can redact or withhold without requesting an attorney general opinion. DIR must implement security protections for submitted assessments. This confidentiality carve-out is noteworthy because it differs from jurisdictions that require public disclosure of impact assessments.
Statutory Text
Sec. 2054.708. IMPACT ASSESSMENTS. (a) A state agency that deploys or uses a heightened scrutiny artificial intelligence system or a vendor that contracts with a state agency for the deployment or use of a heightened scrutiny artificial intelligence system shall conduct a system assessment that outlines: (1) risks of unlawful harm; (2) system limitations; and (3) information governance practices. (b) The state agency or vendor shall make a copy of the assessment available to the department on request. (c) An impact assessment conducted under this section is confidential and not subject to disclosure under Chapter 552. The state agency or department may redact or withhold information as confidential under Chapter 552 without requesting a decision from the attorney general under Subchapter G, Chapter 552. (d) The department shall take actions necessary to ensure the confidentiality of information submitted under this section, including restricting access to submitted information to only authorized personnel and implementing physical, electronic, and procedural protections.
Other · Government · Government System
Gov't Code § 2054.709(a)
Plain Language
When a state agency or vendor becomes aware that it has violated any provision of Subchapter S (the AI regulation subchapter), it must self-report the violation to DIR (if applicable) and the attorney general. This is a mandatory self-reporting obligation — it is not triggered by a safety incident involving a user, but by any violation of the subchapter's requirements (e.g., failure to conduct an impact assessment, failure to disclose AI use, failure to adopt the code of ethics).
Statutory Text
Sec. 2054.709. ENFORCEMENT. (a) If a state agency or vendor becomes aware of a violation of this subchapter, the agency or vendor shall report the violation to the department, if applicable, and the attorney general.
Other · Government System
Gov't Code § 2054.709(b)-(f)
Plain Language
This provision establishes the enforcement mechanism for Subchapter S. The attorney general reviews violation reports and complaints and may seek injunctions. For vendors specifically, there is a two-stage cure process: first a 30-day written notice of violation, then (if uncured) a 30-day notice of intent to void the contract. Only after both cure periods expire may the state agency void the contract. Vendors with multiple voided contracts face potential debarment from all state contracts via comptroller action. This creates no new compliance obligation — it describes the consequences of non-compliance with obligations established elsewhere in the subchapter.
Statutory Text
(b) The attorney general shall: (1) review a report submitted under this section or a complaint reported through the web page established under Section 2054.710; and (2) determine whether to bring an action to enjoin a violation of this subchapter. (c) If the attorney general, in consultation with the department, determines that a vendor violated this subchapter, the attorney general shall provide the vendor with a written notice of the violation. (d) If a vendor fails to respond or cure the violation before the 31st day after the date the vendor receives the written notice under Subsection (c), the state agency shall provide the vendor with a notice of intent to void the contract. The vendor may respond and seek to cure the violation before the 31st day after the date the vendor receives the notice of intent. (e) If the vendor fails to cure the violation before the 31st day after the date the vendor receives the notice of intent to void the contract under Subsection (d), the state agency may void the contract without further obligation to the vendor. (f) If the department determines that a vendor has had more than one contract voided under Subsection (e), the department shall refer the matter to the comptroller. Using procedures prescribed by Section 2155.077, the comptroller may bar the vendor from participating in a state agency contract.
Other · Government System
Gov't Code § 2054.710(a)-(f)
Plain Language
The attorney general must establish a public web page for individuals to report complaints about AI systems — including allegations of unlawful infringement on constitutional rights or financial livelihood and unlawful harm from AI use. Complaints are shared with DIR, and complainants may request an explanation. The attorney general must also post educational content about AI risks, benefits, and consumer rights. Biennially, the AG must report to the legislature on complaints received and enforcement actions taken. This creates obligations on the attorney general's office, not on entities deploying AI — it is part of the enforcement infrastructure.
Statutory Text
Sec. 2054.710. ARTIFICIAL INTELLIGENCE SYSTEM COMPLAINT WEB PAGE. (a) The attorney general shall, in collaboration with the department, establish a web page on the attorney general's Internet website that allows a person to report a complaint relating to artificial intelligence systems, including: (1) instances of an artificial intelligence system allegedly unlawfully infringing on the person's constitutional rights or financial livelihood; or (2) the use of an artificial intelligence system that allegedly results in unlawful harm. (b) A complaint submitted on the web page created under Subsection (a) must be distributed to the department. (c) A person who submits a complaint on the web page created under Subsection (a) may request an explanation from the department. (d) The attorney general shall post on the attorney general's Internet website information that: (1) educates persons regarding the risks and benefits of artificial intelligence systems; and (2) explains a person's rights in relation to artificial intelligence systems. (e) If the attorney general, in consultation with the department, determines that the complaint is substantiated and a violation of this subchapter occurred, the attorney general may seek enforcement under Section 2054.709. (f) Not later than November 30 of each even-numbered year, the attorney general shall submit to the legislature a report summarizing the complaints received under this section, the resolutions of the complaints, and any enforcement actions taken.
T-01 AI Identity Disclosure · T-01.1 · Government · Government System
Gov't Code § 2054.711(a)-(c)
Plain Language
State agencies and local governments must post a standardized notice on all applications, websites, and public computer systems associated with any AI system that is either public-facing or a controlling factor in a consequential decision. DIR will develop the required form, which must describe the system, its data sources, and privacy/ethics compliance measures. This is broader than § 2054.707's disclosure requirement because it covers both public-facing AI and AI that is a controlling factor in consequential decisions (even if not public-facing). Healthcare facilities have a lighter compliance path — they may satisfy this requirement by including a generalized AI disclosure in patient consent forms rather than posting the full standardized notice.
Statutory Text
Sec. 2054.711. STANDARDIZED NOTICE. (a) Each state agency and local government deploying or using an artificial intelligence system that is public-facing or that is a controlling factor in a consequential decision shall include a standardized notice on all related applications, Internet websites, and public computer systems. (b) The department shall develop a form that agencies must use for the notice required under Subsection (a). The form must include: (1) general information about the system and data sources the system uses; and (2) measures taken to maintain compliance with information privacy laws and ethics standards. (c) For the purposes of this section, any health care service by an academic medical center, state owned hospital, public hospital or hospital district organized under Article IX of the Texas Constitution or under Texas Health and Safety Code may satisfy their disclosure requirements by including a generalized statement in the patient consent forms that an artificial intelligence system may be used in the course of their treatment.
G-01 AI Governance Program & Documentation · G-01.6 · Government · Government System
Gov't Code § 2054.137(a-1), (c)
Plain Language
Small state agencies (150 or fewer full-time employees) may either designate a full-time employee as a data management officer or share a data management officer with other agencies, subject to DIR approval. The data management officer must annually post at least three high-value data sets on the Texas Open Data Portal, excluding confidential information. Although this provision concerns data management rather than AI governance as such, the role intersects with AI governance because AI systems rely on government data, and the broader bill (Subchapter S) situates data management alongside the new AI requirements.
Statutory Text
(a-1) A state agency with 150 or fewer full-time employees may: (1) designate a full-time employee of the agency to serve as a data management officer; or (2) enter into an agreement with one or more state agencies to jointly employ a data management officer if approved by the department. (c) In accordance with department guidelines, the data management officer for a state agency shall annually post on the Texas Open Data Portal established by the department under Section 2054.070 at least three high-value data sets as defined by Section 2054.1265. The high-value data sets may not include information that is confidential or protected from disclosure under state or federal law.
Other · Government System
Gov't Code § 2054.704(a)-(d)
Plain Language
DIR must develop and publish AI educational materials covering responsible use, risks, benefits, consumer rights, and risk mitigation for both government employees and the general public. DIR must also host statewide AI best practices forums and training sessions for government employees. This is a government infrastructure obligation on DIR itself — it does not create a compliance obligation on agencies deploying AI, though agencies may be expected to utilize the resulting materials and training.
Statutory Text
Sec. 2054.704. EDUCATIONAL OUTREACH PROGRAM. (a) The department shall develop educational materials on artificial intelligence systems to promote the responsible use of the systems and awareness of the risks and benefits of system use, explain consumer rights in relation to the systems, and describe risk mitigation techniques. (b) The department shall develop training materials for state and local government employees and the general public. The training materials must be made available on the department's public Internet website. (c) The department shall host statewide forums and training sessions on artificial intelligence systems best practices for state and local government employees. (d) The department may: (1) use money appropriated to the department to produce materials required by this section; and (2) contract with a vendor to produce those materials.
Other · Government System
Gov't Code § 2054.705(a)-(f)
Plain Language
This provision creates an eight-member public sector AI advisory board — six state agency representatives and two public technology experts, all governor-appointed — to advise on AI use cases, facilitate resource sharing, consult with DIR, identify implementation opportunities, and recommend deregulation. Members serve two-year terms without compensation. This is an institutional structure provision that creates no compliance obligation for entities deploying AI systems.
Statutory Text
Sec. 2054.705. PUBLIC SECTOR ARTIFICIAL INTELLIGENCE SYSTEMS ADVISORY BOARD. (a) A public sector artificial intelligence systems advisory board is established to assist state agencies in the development, deployment, and use of artificial intelligence systems. (b) The advisory board shall: (1) obtain and disseminate information on artificial intelligence systems, including use cases, policies, and guidelines; (2) facilitate shared resources between state agencies; (3) consult with the department on artificial intelligence systems issues; (4) identify opportunities: (A) for state agencies to implement artificial intelligence systems to reduce administrative burdens; and (B) to streamline the state procurement process for artificial intelligence systems; and (5) recommend elimination of rules that restrict the innovation of artificial intelligence systems. (c) The department shall provide administrative support for the advisory board. (d) The advisory board is composed of eight members as follows: (1) six members representing state agencies, including one member representing an agency with fewer than 150 employees, appointed by the governor or the governor's designee; and (2) two public members with expertise in technology, appointed by the governor or the governor's designee. (e) Advisory board members serve two-year terms. Advisory board members may be reappointed. (f) Advisory board members are not entitled to compensation or reimbursement of expenses for service on the advisory board.
Other · Government System
Gov't Code § 2054.706(a)-(i)
Plain Language
DIR must establish an AI sandbox program allowing eligible entities (defined as eligible customers under § 2054.0525) and registered vendors to test AI systems in a controlled environment without full compliance with otherwise applicable regulations. Vendors must apply by submitting a system description, risk assessment, and mitigation plan. Participants must submit quarterly performance and risk reports to DIR, and DIR must report to the legislature by November 30 of each even-numbered year on program participation and outcomes. The sandbox provides a regulatory relief mechanism for pre-deployment AI testing; the reporting obligations it imposes are conditions of program participation rather than standalone compliance obligations under the broader AI regulation framework.
Statutory Text
Sec. 2054.706. ARTIFICIAL INTELLIGENCE SYSTEM SANDBOX PROGRAM. (a) In this section: (1) "Eligible entity" means an eligible customer under Section 2054.0525. (2) "Program" means the program established by this section that is designed to allow temporary testing of an artificial intelligence system in a controlled, limited manner without requiring full compliance with otherwise applicable regulations. (3) "Vendor" means a person registered with the department as a contractor to provide commodity items under Section 2157.068. (b) The department shall establish and administer a program to support eligible entities in contracting with vendors to engage in research, development, training, testing, and other pre-deployment activities related to artificial intelligence systems to effectively, efficiently, and securely assist the entity in accomplishing its public purposes. (c) The department shall create an application process for vendors to apply to participate in the program. The application process must include: (1) a detailed description of the artificial intelligence system proposed for participation in the program and the system's intended use; (2) a risk assessment of the system that addresses potential impacts on the public; and (3) a plan for mitigating any adverse consequences discovered during the system's testing phase. (d) A vendor participating in the program shall, with oversight by the department, provide eligible entities with secure access to an artificial intelligence system used in the program. (e) The department shall provide to vendors and eligible entities participating in the program detailed guidelines regarding the exemption from compliance with otherwise applicable regulations provided by the program. 
(f) The eligible entities and vendors shall submit quarterly reports to the department that include: (1) performance measures for the artificial intelligence system; (2) risk mitigation strategies implemented during system testing; (3) feedback on program effectiveness and efficiency; and (4) any additional information the department requests. (g) Not later than November 30 of each even-numbered year, the department shall produce an annual report and submit the report to the legislature summarizing: (1) the number of eligible entities and vendors participating in the program and the program outcomes; and (2) recommendations for legislative or other action. (h) Notwithstanding Section 2054.383, the department may operate the program as a statewide technology center under Subchapter L. (i) The department shall share information and resources for the program with any other department program established to allow a person, without holding a license or certificate of registration under the laws of this state, to test an artificial intelligence system for a limited time and on a limited basis.