SB-4
KY · State · USA
● Passed
Proposed Effective Date
2025-03-13
Kentucky SB 4 — An Act Relating to Protection of Information and Declaring an Emergency (25 RS SB 4/EN)
Summary

Kentucky SB 4 has two major components. First, it establishes an AI governance framework for state government by creating an Artificial Intelligence Governance Committee within the Commonwealth Office of Technology, requiring a centralized registry of generative AI and high-risk AI systems used by state agencies, mandating risk management policies adhering to ISO/IEC 42001, requiring public disclosure when AI is used in decisions affecting citizens, and imposing data privacy and security protections. Second, it creates a private right of action for candidates whose appearance, action, or speech is altered through synthetic media in electioneering communications, requiring clear and conspicuous disclosure of synthetic media use, with an affirmative defense available if such disclosure is already included. The Act takes effect immediately as emergency legislation. The first annual report to the legislature is due by December 1, 2025.

Enforcement & Penalties
Enforcement Authority
For the government AI governance provisions (Sections 1–3), the Commonwealth Office of Technology and its Artificial Intelligence Governance Committee are responsible for establishing, publishing, and enforcing policy standards and procedures for state agencies; no private right of action is created. For the synthetic media in elections provisions (Section 5), enforcement is by private right of action brought by any candidate whose appearance, action, or speech is altered through synthetic media in an electioneering communication. The candidate must file in Circuit Court and bear the burden of proving synthetic media use by clear and convincing evidence. Media disseminators and advertising sales representatives are generally shielded from liability unless they intentionally remove disclosures or alter content to create synthetic media. Interactive computer services are exempt under 47 U.S.C. § 230 except for liability under subsection (3) for failure to comply with a court order.
Penalties
For synthetic media in elections claims, courts may award injunctive or other equitable relief and reasonable attorney's fees and costs to a prevailing party. The statute does not limit or preclude a plaintiff from securing or recovering any other available remedy. Failure to comply with a court order requiring disclosure is subject to penalties set forth in KRS 121.990(3) for violation of KRS 121.190(1). No statutory minimum damages amount is specified in the Act itself. No damages or penalties are specified for the government AI governance provisions.
Who Is Covered
"Deployer" means any state department, state agency, or state administrative body in the Commonwealth that puts into use a high-risk artificial intelligence system;.
"Developer" means any department, agency, or administrative body that develops or intentionally and substantially modifies a high-risk artificial intelligence system that is offered, purchased, sold, leased, given, or otherwise provided to citizens and businesses in the Commonwealth;.
What Is Covered
"Artificial intelligence system": (a) Means any machine-based computing system that, for any explicit or implicit objective, infers from the inputs such system receives how to generate outputs, including but not limited to content, decisions, predictions, or recommendations, that can influence physical or virtual environments; and (b) Does not include an artificial intelligence system that is used for development, prototyping, and research activities before such artificial intelligence system;
"High-risk artificial intelligence system": (a) Means any artificial intelligence system that is a substantial factor in the decision-making process or specifically intended to autonomously make, or be a substantial factor in making, a consequential decision; and (b) Does not include a system or service intended to perform a narrow procedural task, improve the result of a completed human activity, or detect decision-making patterns or deviations from previous decision-making patterns and is not meant to replace or influence human assessment without human review, or perform a preparatory task in an assessment relevant to a consequential decision;
"Generative artificial intelligence system" means any artificial intelligence system or service that incorporates generative artificial intelligence;
Compliance Obligations · 19 obligations
G-01 AI Governance Program & Documentation · G-01.1 · Government · Government System
Section 3(1)(a)-(b)
Plain Language
The Commonwealth Office of Technology must create an AI Governance Committee responsible for developing policy standards and guiding principles aligned with ISO/IEC 42001 to mitigate risks and protect citizen data and privacy. The Committee must also establish technology standards for how state agencies use generative AI and high-risk AI systems. This is an internal government governance program establishment obligation — it applies only to state agencies, not private-sector entities.
Statutory Text
(1) The Commonwealth Office of Technology shall create an Artificial Intelligence Governance Committee to govern the use of artificial intelligence systems by state departments, state agencies, and state administrative bodies by: (a) Developing policy standards and guiding principles to mitigate risks and protect data and privacy of Kentucky citizens and businesses that adhere to the latest version of Standard ISO/IEC 42001 of the International Organization for Standardization; (b) Establishing technology standards to provide protocols and requirements for the use of generative artificial intelligence and high-risk artificial intelligence systems;
PS-01 Government AI Accountability · PS-01.1 · Government · Government System
Section 3(1)(d)-(e)
Plain Language
The AI Governance Committee must maintain a centralized registry inventorying all generative AI and high-risk AI systems used by state government. It must also develop an approval process that records applications, use cases, and risk-mitigation rationales for each AI system. This functions as both an inventory and a pre-deployment approval gate for state agency AI use.
Statutory Text
(d) Maintaining a centralized registry to include current inventory of generative artificial intelligence systems and high-risk artificial intelligence systems; and (e) Developing an approval process to include a registry of application, use case, and decision rationale aimed at mitigation of risks.
G-01 AI Governance Program & Documentation · G-01.3 · Government · Government System
Section 3(2)(a)-(b)
Plain Language
State agencies must verify their use and development of generative AI and high-risk AI systems and follow responsible, ethical, and transparent procedures. Specifically, all AI models must have comprehensive documentation available for review; human review and intervention must be required based on use case and risk level; and AI systems must be resilient, accountable, and explainable. This creates both a documentation obligation and a human oversight requirement for state agency AI deployments.
Statutory Text
(2) The Artificial Intelligence Governance Committee shall develop policies and procedures to ensure that any department, program, cabinet, agency, or administrative body that utilizes and accesses the Commonwealth's information technology and technology infrastructure shall: (a) Verify the use and development of generative artificial intelligence systems and high-risk artificial intelligence systems; and (b) Act in compliance with responsible, ethical, and transparent procedures to implement the use of artificial intelligence technologies by: 1. Ensuring artificial intelligence models have comprehensive and complete documentation that is available for review and inspection; 2. Requiring review and intervention by humans dependent on the use case and potential risk for all outcomes from generative and high-risk artificial intelligence systems; and 3. Ensuring the use of generative artificial intelligence and high-risk artificial intelligence systems are resilient, accountable, and explainable.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Government · Government System
Section 3(3)(a)-(c)
Plain Language
The Commonwealth Office of Technology must ensure that all state agencies limit AI system data use to what is necessary, prohibit unrestricted access to Commonwealth-controlled personal data, secure all data, and implement data retention timeframes. This is a data minimization, access control, and retention obligation applied to all state agency AI systems.
Statutory Text
(3) The Commonwealth Office of Technology shall prioritize personal privacy and the protection of the data of individuals and businesses as the state develops, implements, employs, and procures artificial intelligence systems, generative artificial intelligence systems, and high-risk artificial intelligence systems by ensuring all departments, agencies, and administrative bodies: (a) Allow only the use of necessary data in artificial intelligence systems; (b) Do not allow unrestricted access to personal data controlled by the Commonwealth; and (c) Secure all data and implement a timeframe for data retention.
PS-01 Government AI Accountability · PS-01.2 · Government · Government System
Section 3(5)(a)-(e)
Plain Language
Before a state agency AI system is approved, the executive director of the Commonwealth Office of Technology must consider and formally document at least five factors: non-discrimination, citizen benefit, required level of human oversight, risk assessment with mitigation strategies (covering cybersecurity, privacy, health, and safety), and data control and quality. This functions as a pre-deployment impact assessment requirement for government AI systems.
Statutory Text
(5) At a minimum, the executive director of the Commonwealth Office of Technology shall consider and document: (a) How the artificial intelligence system will not result in unlawful discrimination against any individual or group of individuals; (b) How the use of generative artificial intelligence or other artificial intelligence capabilities will benefit the citizens of the Commonwealth and serve the objectives of the department or agency; (c) To what extent oversight and human interaction of the artificial intelligence system should be required; (d) The potential risks, including cybersecurity, data protection and privacy, and health and safety of individuals and businesses, and a mitigation strategy to any identified or potential risk; and (e) The proper control and management for all data possessed by the Commonwealth to maintain security and data quality.
T-01 AI Identity Disclosure · T-01.1 · Government · Government System
Section 3(6)(a)
Plain Language
State agencies must provide a clear and conspicuous public disclaimer whenever AI is used to make decisions about citizens or businesses, to inform a decision or produce an output, or to produce publicly accessible information. This is a broad AI use disclosure obligation: it applies not only to consequential decisions but to any AI involvement in producing citizen-facing information, outputs, or decisions.
Statutory Text
(6) (a) A department, agency, or administrative body shall disclose to the public, through a clear and conspicuous disclaimer, when generative artificial intelligence, artificial intelligence systems, or other artificial intelligence-related capabilities are used: 1. To render any decision regarding individual citizens or businesses within the state; 2. In any process, or to produce materials used by the system or humans, to inform a decision or create an output; or 3. To produce information or outputs accessible by citizens and businesses.
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.4 · H-01.5 · Government · Government System
Section 3(6)(b)
Plain Language
When a state agency AI system makes decisions affecting Kentucky citizens, the agency must: (1) explain how AI is used in the decision-making process, (2) disclose the extent of human involvement in validating the decision, and (3) provide readily available appeal options for individuals subject to consequential AI-involved decisions. This creates both a transparency obligation (explaining AI's role and human oversight level) and an appeal right for individuals affected by consequential automated decisions.
Statutory Text
(b) When an artificial intelligence system makes external decisions related to citizens of the Commonwealth, a department, agency, or administrative body shall: 1. Disclose how artificial intelligence is used in the decision-making process; 2. Provide the extent of human involvement in validating and oversight of any decision made; and 3. Make readily available options for individuals to appeal a consequential decision that involves artificial intelligence.
G-02 Public Transparency & Documentation · G-02.1 · Government · Government System
Section 3(6)(c)
Plain Language
Public disclaimers about government AI use must also include information about any third-party AI products or programs involved, including documentation on how the high-risk AI or generative AI system works — such as system cards or other developer-provided documentation. This effectively requires state agencies to pass through developer-provided documentation (e.g., model cards) as part of their public disclosures.
Statutory Text
(c) Any disclaimer under paragraph (a) of this subsection shall also provide information regarding third-party artificial intelligence products or programs, including but not limited to information as to how the high-risk artificial intelligence system or generative artificial intelligence system works, such as system cards or other documented information provided by developers.
G-01 AI Governance Program & Documentation · G-01.2 · Government · Government System
Section 3(7)
Plain Language
The Commonwealth Office of Technology must establish legal and ethical framework policies ensuring all state agency AI systems comply with existing laws, regulations, and guidelines. These policies must be updated at least annually to keep pace with evolving technology and industry best practices. This is a continuing governance maintenance obligation with a mandatory annual review cycle.
Statutory Text
(7) The Commonwealth Office of Technology shall establish policies to encompass legal and ethical frameworks to ensure that any artificial intelligence systems shall align with existing laws, administrative regulations, and guidelines, which shall be updated at least annually to maintain compliance as technology and industry best practices evolve.
G-01 AI Governance Program & Documentation · G-01.1 · Government · Government System
Section 3(8)(a)-(b)
Plain Language
State agencies may not use a high-risk AI system to make a consequential decision without first designing and implementing a risk management policy and program. The policy must specify governing principles, processes, and responsible personnel, and must identify, mitigate, and document any bias risks in consequential decision-making. The policy must adhere to ISO/IEC 42001 or another recognized international AI risk management framework, and must be scaled to the deployer's size and complexity, the system's nature and intended use, and the sensitivity and volume of data processed. This is a mandatory prerequisite — no high-risk AI consequential decision is permitted without a conforming risk management program in place.
Statutory Text
(8) (a) Operating standards for utilization of high-risk artificial intelligence systems shall prohibit the use of a high-risk artificial intelligence system to render a consequential decision without the design and implementation of a risk management policy and program for high-risk artificial intelligence systems. The risk management policy shall: 1. Specify principles, process, and personnel that shall be utilized to maintain the risk management program; and 2. Identify, mitigate, and document any bias or potential bias that is a potential consequence of use in making a consequential decision. (b) Each risk management policy designed and implemented shall at a minimum adhere to the latest version of Standard ISO/IEC 42001 of the International Organization for Standardization, or another national or internationally recognized risk management framework for artificial intelligence systems, and consider the: 1. Size and complexity of the deployer; 2. Nature, scope, and intended use of the high-risk artificial intelligence system and its deployer; and 3. Sensitivity and volume of data processed.
Other · Government System
Section 3(9)
Plain Language
Nothing in the AI governance provisions (Sections 1–3) requires disclosure of trade secrets, confidential or proprietary AI design or use information, or information that would create a security risk. This is a carve-out that limits the transparency and documentation obligations elsewhere in the Act — it creates no new compliance obligation of its own.
Statutory Text
(9) Sections 1 to 3 of this Act shall not be construed to require the disclosure of trade secrets, confidential or proprietary information about the design or use of an artificial intelligence system, or any information which would create a security risk.
Other · Government · Government System
Section 3(10)
Plain Language
The Commonwealth Office of Technology must provide education and training to state employees on AI benefits, risks, and acceptable use policies. This is a workforce readiness obligation rather than a system-level compliance requirement.
Statutory Text
(10) The Commonwealth Office of Technology shall provide education and training of employees about the benefits and risks of artificial intelligence and allowable use policies.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Government · Government System
Section 3(11)(a)-(b)
Plain Language
By December 1, 2025, and annually thereafter, the Commonwealth Office of Technology must report to the Legislative Research Commission and the Interim Joint Committee on State Government. The report must include the AI registry (inventory and use cases), all applications received for AI use with approval/disapproval decisions and rationales, and third-party AI developers and contractors submitted for review. To compile this report, each state department and agency must submit a report to the Office identifying potential AI deployment use cases with benefit and risk descriptions. This creates both a bottom-up reporting obligation on individual agencies and a top-down annual legislative reporting obligation on the Office.
Statutory Text
(11) (a) The Commonwealth Office of Technology shall transmit reports to the Legislative Research Commission and the Interim Joint Committee on State Government by December 1, 2025, and annually every year thereafter. The reports shall include: 1. The artificial intelligence registry, which shall include the current inventory and use case of artificial intelligence utilized in state government; 2. Applications received for use of artificial intelligence, including the decision and rationale in approving or disapproving a request in compliance with subsection (5)(c) of this section; and 3. Third-party artificial intelligence developers, system administrators, providers, and contractors submitted for review in compliance with subsection (5) of this section. (b) To facilitate the report in paragraph (a) of this subsection, the Commonwealth Office of Technology shall receive from each department, agency, and administrative body a report examining and identifying potential use cases for the deployment of generative artificial intelligence systems and high-risk artificial intelligence systems, including a description of the benefits and risks to individuals, communities, government, and government employees.
Other · Government · Government System
Section 3(12)
Plain Language
The Commonwealth Office of Technology must promulgate implementing administrative regulations by December 1, 2025. This is a rulemaking directive to the agency — it creates no direct compliance obligation for deployers or developers but signals that detailed regulatory requirements will follow.
Statutory Text
(12) The Commonwealth Office of Technology shall promulgate administrative regulations in accordance with KRS Chapter 13A to implement this section and Section 2 of this Act by December 1, 2025.
G-01 AI Governance Program & Documentation · Government · Government System
KRS 42.726(2)(q)
Plain Language
The Commonwealth Office of Technology must establish, publish, maintain, and implement comprehensive policy standards and procedures for responsible, ethical, and transparent use of generative AI and high-risk AI by state agencies. These standards must cover procurement, implementation, ongoing assessment, data security and privacy, and acceptable use guidelines for high-risk AI integration. This is the enabling authority for the Office's AI governance role, codified as an ongoing duty.
Statutory Text
(q) Establishing, publishing, maintaining, and implementing comprehensive policy standards and procedures for the responsible, ethical, and transparent use of generative artificial intelligence systems and high-risk artificial intelligence systems by departments, agencies, and administrative bodies, including but not limited to policy standards and procedures that: 1. Govern their procurement, implementation, and ongoing assessment; 2. Address and provide resources for security of data and privacy; and 3. Create guidelines for acceptable use policies for integrating high-risk artificial intelligence systems;
PS-01 Government AI Accountability · PS-01.4 · Government · Government System
Section 3(4)
Plain Language
All state departments, agencies, and administrative bodies are subject to mandatory review of their generative AI and high-risk AI systems by the Commonwealth Office of Technology. This creates a centralized audit and oversight authority over all state agency AI deployments, functioning as an internal procurement and compliance review requirement.
Statutory Text
(4) To maintain and secure the technology infrastructure, information technology, information resources, and personal information, all departments, agencies, and administrative bodies shall be subject to review of generative artificial intelligence systems or high-risk artificial intelligence systems.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.6 · Deployer · Content Generation · Political Advertising
Section 5(1)(a)-(b), (4)
Plain Language
Any candidate for elected office whose appearance, action, or speech is altered using synthetic media (AI-generated deepfakes using generative adversarial networks) in an electioneering communication may sue the sponsor for injunctive relief requiring a clear and conspicuous disclosure that synthetic media was used. The court may award attorney's fees and costs to the prevailing party, and other remedies are not precluded. An affirmative defense exists if the communication already includes such a disclosure. The electioneering communication must occur within 45 days of a primary or regular election and target the relevant electorate. The plaintiff bears the burden of proving synthetic media use by clear and convincing evidence. Notably, the definition of 'synthetic media' is limited to GAN techniques — other AI generation methods may not be covered.
Statutory Text
(1) (a) Any candidate for any elected office whose appearance, action, or speech is altered through the use of synthetic media in an electioneering communication may seek injunctive or other equitable relief against the sponsor of the electioneering communication requiring that the communication includes a disclosure that is clear and conspicuous and included in, or alongside and associated with, the content in a manner that is likely to be noticed by the user. (b) The court may award a prevailing party reasonable attorney's fees and costs. This paragraph does not limit or preclude a plaintiff from securing or recovering any other available remedy. (4) It is an affirmative defense for any action brought under subsection (1) of this section that the electioneering communication containing synthetic media includes a disclosure that is clear and conspicuous and included in, or alongside and associated with, the content in a manner that is likely to be noticed by the user.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.6 · Deployer · Content Generation · Political Advertising
Section 5(2)(a)-(b), (3), (5)(a)-(b)
Plain Language
This provision establishes the procedural framework and liability allocation for synthetic media election claims. Plaintiffs must file in their county Circuit Court and prove synthetic media use by clear and convincing evidence. Media distributors and their advertising sales representatives are generally shielded from liability unless they (1) intentionally remove a synthetic media disclosure and fail to remedy upon notice, or (2) alter content to create synthetic media. Failure to comply with a court-ordered disclosure requirement triggers penalties under KRS 121.990(3). Federally licensed broadcasters subject to 47 U.S.C. § 315 receive additional protection. This allocates liability primarily to the sponsor, with secondary liability for media platforms only in cases of affirmative misconduct.
Statutory Text
(2) In any action brought under subsection (1) of this section: (a) The plaintiff shall: 1. File in Circuit Court of the county in which he or she resides; and 2. Bear the burden of establishing the use of synthetic media by clear and convincing evidence. (b) The following shall not be liable except as provided in subsection (3) of this section: 1. The medium disseminating the electioneering communication; and 2. An advertising sales representative of such medium. (3) Failure to comply with an order of the court to include the required disclosure herein shall be subject to the penalties set forth in KRS 121.990(3) for violation of KRS 121.190(1). (5) Except when a licensee, programmer, or operator of a federally licensed broadcasting station transmits an electioneering communication that is subject to 47 U.S.C. sec. 315, a medium or its advertising sales representative may be held liable in a cause of action brought under subsection (1) of this section if: (a) The person intentionally removes any disclosure described in subsection (4) of this section from the electioneering communication it disseminates and does not remove the electioneering communication or replace the disclosure when notified; or (b) Subject to affirmative defenses described in subsection (4) of this section, the person changes the content of an electioneering communication in a manner that results in it qualifying as synthetic media.
Other · Content Generation · Political Advertising
Section 5(6)(a)-(c)
Plain Language
Interactive computer services (platforms) are not treated as publishers of third-party content containing synthetic media, consistent with federal Section 230 protections. However, platforms may face liability under subsection (3) if they fail to comply with a court order to include a required disclosure. This provision mirrors and incorporates federal CDA § 230 immunity while carving out a narrow exception for court-order noncompliance. It creates no new affirmative compliance obligation for platforms.
Statutory Text
(6) (a) A provider or user of an interactive computer service shall not be treated as the publisher or speaker of any information provided by another information content provider. (b) An interactive computer service may be held liable in accordance with subsection (3) of this section. (c) An interactive computer service shall be exempt as provided by the Communications and Decency Act of 1996, as amended, 47 U.S.C. sec. 230.