SB-149
UT · State · USA
● Enacted
Effective Date
2024-05-01
Utah S.B. 149 — Artificial Intelligence Amendments (Artificial Intelligence Policy Act)
Summary

Utah SB 149 creates the Artificial Intelligence Policy Act with three main pillars. First, it establishes that using generative AI is not a defense to consumer protection violations and requires persons using generative AI in consumer-facing activities to disclose AI involvement on request; persons providing services of a regulated occupation must proactively disclose AI use verbally at the start of oral interactions and electronically before written exchanges. Second, it creates the Office of Artificial Intelligence Policy within the Department of Commerce and an AI Learning Laboratory Program — a regulatory sandbox that allows participants to test AI technologies under temporary regulatory mitigation agreements for up to 24 months. Third, it amends the criminal code to confirm that offenses committed with the aid of generative AI may be charged against the human actor. Enforcement is agency-only through the Division of Consumer Protection with fines up to $2,500 per violation ($5,000 for order violations); no private right of action exists. Note that Chapter 70 (the Office and Learning Laboratory) was enacted with a sunset date of May 1, 2025.

Enforcement & Penalties
Enforcement Authority
Utah Division of Consumer Protection administers and enforces § 13-2-12. The division director may impose administrative fines; the division may bring court actions. Violation of an administrative or court order is enforceable by the attorney general. No private right of action is created. Criminal liability under § 76-2-107 is enforced through existing criminal prosecution channels.
Penalties
Administrative fines up to $2,500 per violation. In court actions: declaratory relief, injunctive relief, disgorgement of money received in violation (payable to injured persons), fines up to $2,500 per violation, reasonable attorney fees, court costs, and investigative fees. Violation of an administrative or court order: civil penalties up to $5,000 per violation. No statutory minimum damages for individuals.
Who Is Covered
A person who uses, prompts, or otherwise causes generative artificial intelligence to interact with a person in connection with any act administered and enforced by the division, as described in Section 13-2-1.
A person who provides the services of a regulated occupation.
A supplier in connection with a consumer transaction.
What Is Covered
"Generative artificial intelligence" means an artificial system that: (i) is trained on data; (ii) interacts with a person using text, audio, or visual communication; and (iii) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight.
"Artificial intelligence" means a machine-based system that makes predictions, recommendations, or decisions influencing real or virtual environments.
"Artificial intelligence technology" means a computer system, application, or other product that uses or incorporates one or more forms of artificial intelligence.
Compliance Obligations (9 obligations)
T-01 AI Identity Disclosure · T-01.3 · Deployer · Professional · Government / Public Sector
Utah Code § 13-2-12(3)
Plain Language
Any person deploying generative AI in connection with activities overseen by the Utah Division of Consumer Protection must, when asked by the person interacting with the AI, clearly and conspicuously disclose that the person is interacting with generative AI and not a human. This is an on-demand disclosure — it is triggered only when the individual asks or prompts, not proactively. Compare to the proactive disclosure required under subsection (4)(a) for regulated occupations, which does not require a user inquiry.
Statutory Text
A person who uses, prompts, or otherwise causes generative artificial intelligence to interact with a person in connection with any act administered and enforced by the division, as described in Section 13-2-1, shall clearly and conspicuously disclose to the person with whom the generative artificial intelligence interacts, if asked or prompted by the person, that the person is interacting with generative artificial intelligence and not a human.
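The on-demand trigger can be sketched as a simple guard in front of a chatbot's reply function. This is an illustrative sketch, not legal advice: the keyword pattern, the function names, and the disclosure wording are all assumptions (the statute requires a "clear and conspicuous" disclosure but does not prescribe exact text), and a production system would need far more robust intent detection.

```python
import re

# Illustrative pattern for "are you a human / an AI?" style questions.
# The keyword list is a placeholder assumption, not statutory language.
_ASK_PATTERN = re.compile(
    r"\b(are you (a )?(human|real person|bot|an? ai)|is this an? ai)\b",
    re.IGNORECASE,
)

# Hypothetical disclosure text; the statute fixes the substance, not the wording.
DISCLOSURE = (
    "You are interacting with generative artificial intelligence, not a human."
)

def respond(user_message: str, generate_reply) -> str:
    """Wrap a generative-AI reply function with the on-demand disclosure."""
    if _ASK_PATTERN.search(user_message):
        # Disclosure is triggered only when the person asks or prompts.
        return DISCLOSURE
    return generate_reply(user_message)
```

For example, `respond("Are you a human?", model_fn)` returns the disclosure instead of a model-generated answer, while unrelated messages pass through untouched.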
T-01 AI Identity Disclosure · T-01.1 · Deployer · Professional · Healthcare / Clinical · Financial Services
Utah Code § 13-2-12(4)(a)-(b), (5)
Plain Language
Providers of services in a regulated occupation (i.e., any occupation requiring a license or state certification from the Utah Department of Commerce) must proactively and prominently disclose whenever a consumer is interacting with generative AI in the delivery of those services. The disclosure must be given verbally at the start of any oral conversation and via electronic message before any written exchange. This is an unconditional proactive disclosure — unlike subsection (3), it does not require the consumer to ask. Subsection (4)(b) clarifies that this provision does not create a new authorization to provide regulated services via AI; all existing licensure and certification requirements remain in full effect.
Statutory Text
(4) (a) A person who provides the services of a regulated occupation shall prominently disclose when a person is interacting with a generative artificial intelligence in the provision of regulated services. (b) Nothing in this section permits a person to provide the services of a regulated occupation through generative artificial intelligence without meeting the requirements of the regulated occupation. (5) A disclosure described in Subsection (4)(a) shall be provided: (a) verbally at the start of an oral exchange or conversation; and (b) through electronic messaging before a written exchange.
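For written exchanges, the proactive rule can be sketched as a session wrapper that front-loads the notice before the first AI message, unconditionally. Again a sketch under stated assumptions: the class, field names, and disclosure wording are hypothetical, and the statute separately requires a verbal disclosure at the start of oral exchanges, which this text-only sketch does not cover.

```python
from dataclasses import dataclass, field

# Hypothetical disclosure text; § 13-2-12(5)(b) requires delivery "through
# electronic messaging before a written exchange" but does not fix the wording.
DISCLOSURE = (
    "Notice: you are interacting with generative artificial intelligence "
    "in the provision of these services."
)

@dataclass
class WrittenSession:
    """Sketch of a chat session that front-loads the subsection (4)(a) notice."""
    transcript: list = field(default_factory=list)
    _disclosed: bool = False

    def send(self, ai_reply: str) -> list:
        # Proactive and unconditional: the disclosure precedes the first AI
        # message regardless of whether the consumer asked.
        if not self._disclosed:
            self.transcript.append(DISCLOSURE)
            self._disclosed = True
        self.transcript.append(ai_reply)
        return self.transcript
```

The design point is that the disclosure is emitted by the session itself, not by the model: relying on the generative system to disclose its own nature would leave compliance dependent on non-deterministic output.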
Other · Government / Public Sector
Utah Code § 13-2-12(2)
Plain Language
A person cannot avoid liability for a consumer protection violation enforced by the Division of Consumer Protection by arguing that generative AI — rather than the person — made the violative statement, undertook the violative act, or was used in furtherance of the violation. In practice, this means any company deploying generative AI in consumer-facing transactions is fully liable for the AI's outputs as though the company itself made the statements or took the actions. This is a liability attribution rule, not a new affirmative compliance obligation.
Statutory Text
It is not a defense to the violation of any statute administered and enforced by the division, as described in Section 13-2-1, that generative artificial intelligence: (a) made the violative statement; (b) undertook the violative act; or (c) was used in furtherance of the violation.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.3 · Deployer · Professional · Government / Public Sector
Utah Code § 13-11-4(2)(i)
Plain Language
The amendment to § 13-11-4(2)(i) adds 'license' and 'certification' to the list of attributes that constitute a deceptive practice if a supplier falsely claims them. In the AI context, read together with the no-defense provision in § 13-2-12(2), this means that if a generative AI system implies to a consumer that its operator holds a license or certification the operator does not possess, that constitutes a deceptive practice. Suppliers using AI must ensure their AI-generated communications do not falsely represent licensure or certification status.
Statutory Text
Without limiting the scope of Subsection (1), a supplier commits a deceptive act or practice if the supplier knowingly or intentionally: ... (i) indicates that the supplier has a sponsorship, approval, license, certification, or affiliation the supplier does not have;
R-01 Incident Reporting · R-01.1 · Deployer · Professional · Government / Public Sector
Utah Code § 13-70-304(5)
Plain Language
Learning Laboratory participants must immediately report to the Office of Artificial Intelligence Policy any incident resulting in consumer harm, a privacy breach, or unauthorized data usage. This is a continuous obligation throughout the participation period. Failure to report — or the underlying incident itself — may result in removal from the Learning Laboratory and exposure to all applicable civil and criminal penalties.
Statutory Text
A participant shall immediately report to the office any incidents resulting in consumer harm, privacy breach, or unauthorized data usage, which may result in removal of the participant from the learning laboratory.
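A participant's internal tooling might capture the three reportable categories as a structured record so that nothing reportable is batched or delayed. The statute prescribes no format or submission channel, so everything below (the class, the fields, the timestamping) is a hypothetical internal design, not an Office-specified interface.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from enum import Enum

class IncidentType(Enum):
    # The three reportable categories named in § 13-70-304(5).
    CONSUMER_HARM = "consumer harm"
    PRIVACY_BREACH = "privacy breach"
    UNAUTHORIZED_DATA_USAGE = "unauthorized data usage"

@dataclass
class IncidentReport:
    """Hypothetical internal record; the statute does not prescribe a format."""
    incident_type: IncidentType
    description: str
    occurred_at: str
    reported_at: str = ""

    def finalize(self) -> dict:
        # Stamp the submission time when the report goes out; "immediately"
        # is the statutory standard, so reports should not be queued or batched.
        self.reported_at = datetime.now(timezone.utc).isoformat()
        record = asdict(self)
        record["incident_type"] = self.incident_type.value
        return record
```

Keeping the category list closed to the statutory three makes it obvious when an event falls outside the mandatory-reporting scope, though a cautious participant may report more broadly.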
G-01 AI Governance Program & Documentation · G-01.3 · Deployer · Professional · Government / Public Sector
Utah Code § 13-70-304(2), (4)
Plain Language
Participants in the AI Learning Laboratory must provide information to state agencies and report to the Office as specified in their participation agreement. They must also retain records as required by Office rules or the agreement. The specifics of what information, what reports, and what records will be determined by the Office's rules and the individual participation agreement — the statute delegates those details.
Statutory Text
(2) A participant shall: (a) provide required information to state agencies in accordance with the terms of the participation agreement; and (b) report to the office as required in the participation agreement. ... (4) A participant shall retain records as required by office rule or the participation agreement.
S-01 AI System Safety Program · S-01.5 · Deployer · Professional · Government / Public Sector
Utah Code § 13-70-303(1)
Plain Language
To qualify for regulatory mitigation (reduced enforcement terms) within the Learning Laboratory, a participant must affirmatively demonstrate to the Office: technical capability, sufficient financial resources, that the AI technology's consumer benefits potentially outweigh risks from relaxed enforcement, an effective risk monitoring and minimization plan, and that the proposed testing is appropriately scoped and limited based on risk assessments. These are eligibility prerequisites — the Office evaluates them before granting any mitigation agreement.
Statutory Text
To be eligible for regulatory mitigation, a participant shall demonstrate to the office that: (a) the participant has the technical expertise and capability to responsibly develop and test the proposed artificial intelligence technology; (b) the participant has sufficient financial resources to meet obligations during testing; (c) the artificial intelligence technology provides potential substantial consumer benefits that may outweigh identified risks from mitigated enforcement of regulations; (d) the participant has an effective plan to monitor and minimize identified risks from testing; and (e) the scale, scope, and duration of proposed testing is appropriately limited based on risk assessments.
Other · Government / Public Sector
Utah Code § 76-2-107
Plain Language
A person can be found criminally guilty of an offense if they commit the offense with the aid of generative AI, or if they intentionally prompt or cause generative AI to commit the offense. This closes a potential gap in criminal liability by making clear that using AI as an intermediary does not insulate the human actor from criminal prosecution. It is a criminal law attribution provision, not an affirmative compliance obligation for AI developers or deployers.
Statutory Text
(1) As used in this section, "generative artificial intelligence" means the same as that term is defined in Section 13-2-12. (2) An actor may be found guilty of an offense if: (a) the actor commits the offense with the aid of a generative artificial intelligence; or (b) the actor intentionally prompts or otherwise causes a generative artificial intelligence to commit the offense.
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Professional · Government / Public Sector
Utah Code § 13-70-302(4), (6)
Plain Language
Each regulatory mitigation agreement must specify scope limitations on the AI technology's use (user types, geographic boundaries, and other implementation constraints), safeguards that must be in place, and the specific regulatory relief granted. Critically, participants remain fully subject to every legal and regulatory requirement that the agreement does not expressly waive or modify. This provision structures the sandbox as a limited, documented departure from baseline regulation rather than a blanket exemption.
Statutory Text
(4) A regulatory mitigation agreement between a participant and the office and relevant agencies shall specify: (a) limitations on scope of the use of the participant's artificial intelligence technology, including: (i) the number and types of users; (ii) geographic limitations; and (iii) other limitations to implementation; (b) safeguards to be implemented; and (c) any regulatory mitigation granted to the applicant. ... (6) A participant remains subject to all legal and regulatory requirements not expressly waived or modified by the terms of the regulatory mitigation agreement.