AB-2575
CA · State · USA
Status
Pending
Proposed Effective Date
2027-01-01
California AB 2575 — Health care services: artificial intelligence
Summary

Imposes disclosure, worker protection, and liability obligations on health facilities, clinics, physician's offices, and group practices that use AI or clinical decision support systems (collectively 'covered tools') in patient care. Health care entities must disclose detailed information about covered tools — including developer identity, training data characteristics, known biases, validation processes, and override rights — to any licensed professional or person using or viewing outputs from the tool. The bill prohibits employers from using technology to replace or limit a worker's professional judgment in patient care and prohibits retaliation against workers who override AI outputs or comply with employer-approved technology. A separate civil liability provision bars defendants who developed, modified, selected, or deployed AI or clinical decision support systems from asserting that a health care worker's failure to override the AI's output is a superseding cause severing the defendant's liability.

Enforcement & Penalties
Enforcement Authority
Multiple authorities enforce the bill, depending on entity type. Violations by licensed health facilities are enforced under Health & Safety Code Article 4 (commencing with § 1290). Violations by licensed clinics are enforced under Health & Safety Code Article 4 (commencing with § 1235). Violations by physicians are subject to the jurisdiction of the Medical Board of California or the Osteopathic Medical Board of California. Violations also constitute unfair competition under Business & Professions Code § 17200, enforceable by the Attorney General, district attorneys, county counsel, and city attorneys, and by private parties who meet the UCL's standing requirements (injury in fact and loss of money or property). Workers subject to retaliation or discrimination may file a complaint with the Labor Commissioner.
Penalties
Health facility and clinic violations are subject to existing enforcement penalties under the Health & Safety Code. Violations constitute unfair competition under Business & Professions Code § 17200, which provides for injunctive relief, restitution, and civil penalties of up to $2,500 per violation in actions brought by public prosecutors. The bill does not create a standalone private right of action with statutory damages. Workers subject to retaliation may file complaints with the Labor Commissioner for remedies available under the Labor Code. Civil Code § 1714.48 addresses liability defenses in tort actions but does not itself create a new cause of action or specify damages.
Who Is Covered
Health facilities, clinics, physician's offices, and offices of group practices that use or deploy covered tools in patient care (disclosure obligations); employers of workers providing direct patient care (worker-protection provisions); and any defendant that developed, modified, selected, or deployed AI or a clinical decision support system (civil liability provision).
What Is Covered
"Covered tool" means artificial intelligence or a clinical decision support system.
"Artificial intelligence" means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
"Clinical decision support system" means a computerized system or tool that does both of the following: (A) Supports decisionmaking related to patient care based on algorithms, or models, based in clinical practice guidelines or that derive relationships from training data, including such algorithms or models that are developed using unsupervised learning models. (B) Produces an output that results in a prediction, classification, recommendation, evaluation, or analysis.
Compliance Obligations · 6 obligations
G-02 Public Transparency & Documentation · G-02.1 · Deployer · Healthcare
Health & Safety Code § 1339.76(a)-(c)
Plain Language
Health facilities, clinics, physician's offices, and group practices that use or deploy AI or clinical decision support systems in patient care must provide a comprehensive disclosure to any licensed health care professional or other person who uses the tool or views its outputs. The disclosure must cover twelve categories of information: tool details (developer, funding source, any foundation model used, and a description of the output); intended use, including patient population, users, and decisionmaking role; cautioned out-of-scope uses, with known risks and limitations; the tool's inputs; how the tool generates outputs; development details, including training data demographics, known biases, and the fairness process; the validation process; qualitative performance measures; ongoing maintenance; updates and continued validation or fairness assessment; a notice of health care entity and developer liability; and a notice that the worker may override the tool's output based on professional judgment. The disclosure must be provided at the time of tool use, in plain language, linked in the patient's health record, and with sufficient time for the professional to make informed decisions. This functions as a detailed model-card requirement specific to health care AI tools, directed at the deploying health care entity rather than the AI developer.
Statutory Text
(a) A health facility, clinic, physician's office, or office of a group practice that uses or deploys a covered tool for patient care shall disclose required information, described in subdivision (b), to any licensed health care professional or other person using a covered tool or viewing outputs from a covered tool. (b) Required information under subdivision (a) shall include all of the following: (1) Details on the covered tool, including developer, funding source, any foundation model used, and description of output. (2) Intended use of the covered tool, including intended patient population, intended users, and intended decisionmaking role. (3) Cautioned out-of-scope use of the covered tool, including known risks and limitations. (4) List of the inputs into the covered tool. (5) Description of how the covered tool generates outputs. (6) Development details of the covered tool, including, but not limited to, all of the following: (A) Description of the training set or clinical research underlying recommendations, including demographic representativeness and known biases based on protected characteristics. (B) Description of the relevance of training data to deployed setting. (C) Process used to ensure fairness in development of the intervention. (7) Description of the validation process. (8) Qualitative measures of performance. (9) Description of ongoing maintenance of intervention implementation and use. (10) Description of updates and continued validation or fairness assessment process. (11) Notice that health care entities and developers are liable for harm that results from the use of artificial intelligence in patient care. (12) Notice that a worker providing direct patient care is permitted to override the output of a covered tool if, in the judgment of the worker acting in their scope of practice, such an override is appropriate for the patient, or as necessary to comply with applicable law, including civil rights law. (c) (1) A disclosure made pursuant to this section shall be provided at the time the licensed health care professional or other person uses the covered tool or views any recommendation or output generated by the covered tool. (2) The disclosure shall be provided in plain language to, and linked in the health record of, any patient whose care was affected by the output of the covered tool or whose health information was used as an input to the covered tool. (3) The disclosure shall be provided with ample time for the licensed health care professional or other person to review and make reasoned decisions based on their professional judgment on whether and how to use the covered tool.
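To make the twelve disclosure categories concrete, here is a minimal sketch, assuming a deployer chooses to model the § 1339.76(b) record in code. The CoveredToolDisclosure class and every field name are hypothetical; the statute prescribes the content and timing of the disclosure, not any data format.

# Hypothetical record mirroring the twelve disclosure categories in
# Health & Safety Code § 1339.76(b). Field names and structure are
# illustrative assumptions, not statutory requirements.
from dataclasses import dataclass


@dataclass
class CoveredToolDisclosure:
    # (b)(1) Tool details
    developer: str
    funding_source: str
    foundation_model: str | None
    output_description: str
    # (b)(2) Intended use
    intended_patient_population: str
    intended_users: str
    intended_decisionmaking_role: str
    # (b)(3) Cautioned out-of-scope use
    out_of_scope_uses: list[str]
    known_risks_and_limitations: list[str]
    # (b)(4) Inputs and (b)(5) output generation
    inputs: list[str]
    output_generation_method: str
    # (b)(6) Development details
    training_data_description: str
    demographic_representativeness: str
    known_biases: list[str]
    fairness_process: str
    # (b)(7) Validation and (b)(8) performance
    validation_process: str
    qualitative_performance_measures: str
    # (b)(9) Maintenance and (b)(10) updates
    ongoing_maintenance: str
    update_and_revalidation_process: str
    # (b)(11) and (b)(12) Notices whose substance is fixed by the statute
    liability_notice: str = ("Health care entities and developers are "
                             "liable for harm that results from the use of "
                             "artificial intelligence in patient care.")
    override_notice: str = ("A worker providing direct patient care may "
                            "override the output of this tool if, in the "
                            "worker's judgment within their scope of "
                            "practice, an override is appropriate for the "
                            "patient or necessary to comply with applicable "
                            "law, including civil rights law.")

Whatever form such a record takes, subdivision (c) still governs delivery: at the time of use, in plain language, linked in the affected patient's health record, and with enough lead time for reasoned professional judgment.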
H-01 Human Oversight of Automated Decisions · H-01.6 · Deployer · Healthcare
Labor Code § 2821(c)
Plain Language
Employers may not use or deploy AI, clinical decision support systems, or other technology in a manner that replaces or limits a health care worker's exercise of professional judgment in patient care. This is an affirmative prohibition — the employer must ensure that technology is deployed as a supplement to, not a replacement for, clinical judgment. In practice, this means AI outputs in patient care settings must remain advisory and cannot be treated as binding directives that override worker discretion. This goes beyond requiring that human review be available upon request — it categorically prohibits technology from supplanting professional judgment.
Statutory Text
(c) An employer shall not use or deploy technology to replace or limit a worker's use of professional judgment in patient care.
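As one hypothetical illustration of the advisory-only posture § 2821(c) requires, a deployer might gate every covered-tool output behind an explicit clinician decision, so that a recommendation never enters the care plan on its own. The types and function below are assumptions for illustration; the statute prescribes the outcome, not a software design.

# Hypothetical workflow sketch: a covered tool's output is advisory and
# takes effect only through an explicit clinician decision. Nothing here
# is mandated by Labor Code § 2821; it shows one way to keep technology
# from replacing or limiting professional judgment.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ACCEPT = "accept"      # clinician adopts the tool's recommendation
    OVERRIDE = "override"  # clinician substitutes their own judgment


@dataclass
class AdvisoryOutput:
    tool_id: str
    recommendation: str  # a candidate only; never auto-applied


@dataclass
class ClinicianDecision:
    clinician_id: str
    decision: Decision
    rationale: str       # documents the exercise of professional judgment
    final_order: str     # what actually enters the care plan


def resolve_order(output: AdvisoryOutput,
                  decision: ClinicianDecision) -> str:
    """Return the order to record: the clinician's decision controls,
    whether they accept the recommendation or override it."""
    if decision.decision is Decision.ACCEPT:
        return output.recommendation
    return decision.final_order

Recording the rationale in both directions also dovetails with § 2821(d): the worker is protected whether they overrode the output or complied with an employer-approved tool.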
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Deployer · Healthcare
Labor Code § 2821(d)-(e)
Plain Language
Employers are prohibited from retaliating or discriminating against a health care worker who provides direct patient care in two scenarios: (1) the worker overrides or requests to override AI or technology output based on their professional judgment or to comply with applicable law (including civil rights law), or (2) the worker complies with the output of technology that the employer itself selected or approved. Workers who experience retaliation or discrimination may file a complaint with the Labor Commissioner. This creates a two-directional shield — a worker is protected both for overriding AI and for following it, provided the relevant conditions are met.
Statutory Text
(d) An employer shall not retaliate or discriminate against a worker providing direct patient care based on both of the following: (1) The worker's override of, or request to override, the output of technology if, in the judgment of the worker acting in their scope of practice, such an override is appropriate for the patient, or as necessary to comply with applicable law, including civil rights law. (2) The worker's compliance with the output of technology if the technology was provided or approved by the worker's employer for patient care. (e) A worker who is subject to retaliation or discrimination in violation of this article has the right under this article to file a complaint with the Labor Commissioner against an employer who retaliates or discriminates against the worker.
Other · Healthcare
Civil Code § 1714.48(b)-(c)
Plain Language
Defendants who developed, modified, selected, or deployed AI or clinical decision support systems cannot argue that a health care worker's failure to override the AI's output constitutes a superseding cause that breaks the chain of liability. In other words, an AI developer or deployer cannot shift blame to the clinician who relied on the tool's output. Defendants retain all other affirmative defenses, including evidence relevant to causation and foreseeability and arguments about the comparative fault of others, so this provision narrows the available defenses rather than creating strict liability. It eliminates a specific tort defense but imposes no new affirmative compliance obligation.
Statutory Text
(b) In an action against a defendant who developed, modified, selected, or deployed artificial intelligence or a clinical decision support system that is alleged to have caused harm to the plaintiff, it shall not be a defense, and the defendant may not assert, that the failure of a licensed health care professional or other health care worker to override an output of the artificial intelligence or clinical decision support system is a superseding cause severing the defendant's liability for the alleged harm. (c) This section does not limit or preclude a defendant from presenting either of the following: (1) Any other affirmative defense, including evidence relevant to causation or foreseeability. (2) Other evidence relevant to the comparative fault of any other person or entity.
Other · Healthcare
Labor Code § 2821(a)-(b)
Plain Language
These are legislative policy declarations establishing that California's policy is to protect health care workers' professional judgment and shield workers from penalty when relying in good faith on employer-selected technology. These statements provide interpretive context for the operative prohibitions in subsections (c) through (e) but do not independently create compliance obligations.
Statutory Text
(a) It is the public policy of the State of California that a worker providing direct patient care be free to use their professional judgment to make assessments and decisions within their scope of practice as appropriate for their patients. (b) It is the public policy of the State of California that a worker providing direct patient care should not be penalized for relying in good faith on technology that the licensed health care professional's employer has selected or approved for their use in patient care.
Other · Healthcare
Health & Safety Code § 1339.76(d)(1)-(4)
Plain Language
This provision specifies which enforcement authorities handle violations of the AI disclosure requirements: the Department of Public Health for health facilities and clinics (under existing licensure enforcement frameworks), the Medical Board of California or the Osteopathic Medical Board of California for physicians, and any UCL enforcer (the Attorney General, local public prosecutors, or private parties with standing) for unfair competition claims. It activates existing enforcement mechanisms but creates no new affirmative compliance obligation.
Statutory Text
(d) (1) A violation of this section by a licensed health facility is subject to the enforcement mechanisms described in Article 4 (commencing with Section 1290) of Chapter 2. (2) A violation of this section by a licensed clinic is subject to the enforcement mechanisms described in Article 4 (commencing with Section 1235) of Chapter 1. (3) A violation of this section by a physician is subject to the jurisdiction of the Medical Board of California or the Osteopathic Medical Board of California, as appropriate. (4) A violation of this section constitutes "unfair competition" as defined in Section 17200 of the Business and Professions Code and is punishable as prescribed in Chapter 5 (commencing with Section 17200) of Part 2 of Division 7 of the Business and Professions Code.