HB-713
VA · State · USA
● Pending
Proposed Effective Date
2027-07-01
Virginia HB 713 — Fostering Access, Innovation, and Responsibility in Artificial Intelligence Act (FAIR AI Act)
Summary

Virginia HB 713 establishes the FAIR AI Act, which imposes a disclosure obligation on developers of base artificial intelligence models, requiring them to publish specified model information in their terms of service. The bill eliminates 'the AI did it' as a defense in civil and criminal actions for both developers and deployers, preventing defendants from arguing that autonomous AI conduct shields them from liability. It creates the FAIR AI Enforcement Fund, administered by the Attorney General, to support state agency enforcement against AI system misuse, bias, and workforce disruption. The bill does not create a private right of action, nor does it specify monetary penalties or remedies. The bill was left in committee and has not been enacted.

Enforcement & Penalties
Enforcement Authority
The FAIR AI Enforcement Fund is administered by the Attorney General, who authorizes expenditures for state agency enforcement against AI system misuse, bias, and workforce disruption. The statute creates no private right of action, and it details no complaint-driven or agency-initiated enforcement mechanism beyond the Fund structure.
Penalties
The statute does not specify monetary penalties, civil penalty amounts, statutory damages, or specific remedies. It creates an enforcement fund for state agency enforcement activities but does not enumerate available damages or remedies.
Who Is Covered
"Deployer" means any person doing business in the Commonwealth that deploys or uses an artificial intelligence system to make a consequential decision in the Commonwealth.
"Developer" means any person doing business in the Commonwealth that develops or modifies an artificial intelligence system that is offered, sold, leased, given, or otherwise made available to deployers or consumers in the Commonwealth.
What Is Covered
"Artificial intelligence system" means any machine learning-based system that, for any explicit or implicit objective, infers from the inputs such system receives how to generate outputs, including content, decisions, predictions, and recommendations, that can influence physical or virtual environments. "Artificial intelligence system" does not include any artificial intelligence system or base artificial intelligence model that is used for development, prototyping, and research activities before such artificial intelligence system or base artificial intelligence model is made available to deployers or consumers.
"Base artificial intelligence model" means a large-scale artificial intelligence system model trained on broad and diverse data to learn general patterns and relationships, serving as a foundational platform adaptable for various specific tasks or applications.
Compliance Obligations · 3 obligations
G-02 Public Transparency & Documentation · G-02.1 · Developer · Foundation Model
§ 59.1-615(A)-(B)
Plain Language
Developers of base AI models must publish seven enumerated items clearly and conspicuously in the model's terms of service: the model name, developer name, developer's incorporation location, most recent version release date, training data update date, supported languages, and a link to the terms of service. The disclosure must be appropriate for the medium and easily accessible to users. Importantly, making this disclosure does not insulate the developer from liability — subsection B explicitly states that providing the disclosure is not a defense to harm claims.
Statutory Text
A. A developer of a base artificial intelligence model shall clearly and conspicuously disclose, in a manner that is appropriate for the medium of the content and is easily accessible to the user of such model, in the terms of service governing the use of such model:
1. The name of the model;
2. The developer of the model;
3. The location where the developer is incorporated;
4. The release date of the most recent version of the model;
5. The date that the model's training data was most recently updated;
6. Supported languages for the model; and
7. A link to the model's terms of service.
B. The provision of such disclosure to a user shall not be a defense to liability for any harm caused to a plaintiff.
Other · Foundation Model · Automated Decisionmaking
§ 59.1-617(A)-(C)
Plain Language
Developers cannot defend against liability claims by arguing that the AI system autonomously caused the harm (subsection A). Deployers cannot defend by arguing that the AI system, rather than the deployer, caused the harm (subsection B). Together, these provisions foreclose the 'the AI did it' defense. However, subsection C preserves traditional common law defenses, including intervening or superseding cause by another individual. This provision modifies the litigation landscape for AI-related harm but does not create an affirmative compliance obligation.
Statutory Text
A. In any criminal or civil action against a defendant that is alleged to have developed or modified an artificial intelligence system that caused harm to a plaintiff, it shall not be a defense that the artificial intelligence system autonomously caused such harm to the plaintiff.
B. In any criminal or civil action against a defendant that is alleged to have deployed or otherwise used an artificial intelligence system that caused harm to a plaintiff, it shall not be a defense that the alleged harm was caused by the artificial intelligence system.
C. Nothing in this section shall prevent a defendant from asserting existing available defenses at common law, including that the harm to a plaintiff was caused by the intervening or superseding conduct of another individual.
Other · Foundation Model · Automated Decisionmaking
§ 59.1-616
Plain Language
The statute creates the FAIR AI Enforcement Fund in the state treasury, a non-reverting special fund dedicated to supporting state agency enforcement activities related to AI misuse, bias, and workforce disruption. The Attorney General authorizes disbursements. This is a government fiscal mechanism — it creates no compliance obligation for developers or deployers but signals the state's intent to fund AI enforcement activities.
Statutory Text
There is hereby created in the state treasury a special nonreverting fund to be known as the FAIR AI Enforcement Fund. The Fund shall be established on the books of the Comptroller. All funds appropriated for such purpose shall be paid into the state treasury and credited to the Fund. Interest earned on moneys in the Fund shall remain in the Fund and be credited to it. Any moneys remaining in the Fund, including interest thereon, at the end of each fiscal year shall not revert to the general fund but shall remain in the Fund. Moneys in the Fund shall be used solely for the purpose of supporting state agency enforcement of artificial intelligence system misuse, bias, and workforce disruption. Expenditures and disbursements from the Fund shall be made by the State Treasurer on warrants issued by the Comptroller upon written request signed by the Attorney General.