AB-2575
CA · State · USA
● Pending
Proposed Effective Date
2027-01-01
California AB 2575 — Health care services: artificial intelligence
Summary

Imposes three sets of obligations on healthcare entities using AI or clinical decision support systems in patient care. First, health facilities, clinics, and physician offices must disclose detailed information about any AI or clinical decision support tool to healthcare professionals using or viewing its outputs, including developer details, training data characteristics, known biases, validation processes, and a notice that workers may override tool outputs. Second, employers may not use technology to replace or limit a healthcare worker's professional judgment in patient care, and may not retaliate against workers who override or comply with AI tool outputs. Third, the bill prohibits defendants who developed, modified, selected, or deployed AI or clinical decision support systems from asserting that a healthcare worker's failure to override the tool's output is a superseding cause severing the defendant's liability. Enforcement is split among the State Department of Public Health, the Medical Board of California, and the Labor Commissioner, with unfair competition remedies additionally available under Business and Professions Code § 17200 (UCL).

Enforcement & Penalties
Enforcement Authority
Multiple enforcement authorities by provision: (1) Disclosure violations by licensed health facilities are enforced by the State Department of Public Health under Health & Safety Code Article 4 (commencing with § 1290). (2) Disclosure violations by licensed clinics are enforced under Health & Safety Code Article 4 (commencing with § 1235). (3) Disclosure violations by physicians are subject to the jurisdiction of the Medical Board of California or the Osteopathic Medical Board of California. (4) Disclosure violations constitute unfair competition under Bus. & Prof. Code § 17200, enforceable by the Attorney General, district attorneys, county counsel, and city attorneys, as well as by private persons under the UCL's standing requirements. (5) Labor Code anti-retaliation violations are enforced by the Labor Commissioner upon worker complaint. (6) Civil Code § 1714.48 modifies available defenses in private tort litigation — no separate enforcement authority is designated; it applies in the context of a civil action brought by a harmed plaintiff.
Penalties
No express private right of action is created by the bill. Disclosure violations constitute unfair competition under Bus. & Prof. Code § 17200, which provides for injunctive relief and restitution (but not damages) in private UCL actions, plus civil penalties of up to $2,500 per violation in public enforcement actions. Workers subject to retaliation may file complaints with the Labor Commissioner, who may order reinstatement, back pay, and other appropriate relief under existing Labor Code enforcement mechanisms. The Civil Code § 1714.48 provision modifies tort defenses — actual damages in underlying tort actions are determined by general tort principles.
Who Is Covered
"Clinic" has the same meaning as defined in Section 1200 of the Health and Safety Code.
"Health facility" has the same meaning as defined in Section 1250 of the Health and Safety Code.
"Office of a group practice" has the same meaning as defined in Section 1339.75 of the Health and Safety Code.
"Physician's office" has the same meaning as defined in Section 1339.75 of the Health and Safety Code.
What Is Covered
"Artificial intelligence" means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
"Clinical decision support system" means a computerized system or tool that does both of the following: (A) Supports decisionmaking related to patient care based on algorithms, or models, based in clinical practice guidelines or that derive relationships from training data, including such algorithms or models that are developed using unsupervised learning models. (B) Produces an output that results in a prediction, classification, recommendation, evaluation, or analysis.
"Covered tool" means artificial intelligence or a clinical decision support system.
"Technology" means scientific hardware or software, including artificial intelligence and clinical decision support systems, used to achieve a medical or nursing care objective at a health facility.
Compliance Obligations · 4 obligations
G-02 Public Transparency & Documentation · G-02.1 · Deployer · Healthcare
Health & Safety Code § 1339.76(a)-(c)
Plain Language
Health facilities, clinics, physician offices, and group practice offices that use any AI or clinical decision support system for patient care must provide a comprehensive disclosure to every healthcare professional or other person who uses the tool or views its outputs. The disclosure must cover twelve categories of information: developer and funding details, intended use and patient population, out-of-scope risks and limitations, system inputs and output generation methods, training data characteristics including demographic representativeness and known biases, fairness processes, validation methodology, performance measures, ongoing maintenance plans, update and continued validation processes, a liability notice, and a notice that direct patient care workers may override the tool's output. The disclosure must reach the professional at the time the tool is used or its output is viewed, with enough time to make reasoned decisions based on professional judgment about whether and how to use the tool; it must also be provided in plain language to, and linked in the health record of, any patient whose care was affected by the tool's output or whose health information was used as an input to the tool.
Statutory Text
(a) A health facility, clinic, physician's office, or office of a group practice that uses or deploys a covered tool for patient care shall disclose required information, described in subdivision (b), to any licensed health care professional or other person using a covered tool or viewing outputs from a covered tool. (b) Required information under subdivision (a) shall include all of the following: (1) Details on the covered tool, including developer, funding source, any foundation model used, and description of output. (2) Intended use of the covered tool, including intended patient population, intended users, and intended decisionmaking role. (3) Cautioned out-of-scope use of the covered tool, including known risks and limitations. (4) List of the inputs into the covered tool. (5) Description of how the covered tool generates outputs. (6) Development details of the covered tool, including, but not limited to, all of the following: (A) Description of the training set or clinical research underlying recommendations, including demographic representativeness and known biases based on protected characteristics. (B) Description of the relevance of training data to deployed setting. (C) Process used to ensure fairness in development of the intervention. (7) Description of the validation process. (8) Qualitative measures of performance. (9) Description of ongoing maintenance of intervention implementation and use. (10) Description of updates and continued validation or fairness assessment process. (11) Notice that health care entities and developers are liable for harm that results from the use of artificial intelligence in patient care. (12) Notice that a worker providing direct patient care is permitted to override the output of a covered tool if, in the judgment of the worker acting in their scope of practice, such an override is appropriate for the patient, or as necessary to comply with applicable law, including civil rights law. 
(c) (1) A disclosure made pursuant to this section shall be provided at the time the licensed health care professional or other person uses the covered tool or views any recommendation or output generated by the covered tool. (2) The disclosure shall be provided in plain language to, and linked in the health record of, any patient whose care was affected by the output of the covered tool or whose health information was used as an input to the covered tool. (3) The disclosure shall be provided with ample time for the licensed health care professional or other person to review and make reasoned decisions based on their professional judgment on whether and how to use the covered tool.
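For teams operationalizing subdivision (b), the twelve required categories can be tracked as a simple completeness checklist. The sketch below is illustrative only; the field names are paraphrases of the statutory categories, not defined terms, and nothing in the bill prescribes a particular record format.

```python
# Illustrative sketch: tracking the twelve disclosure categories required by
# Health & Safety Code § 1339.76(b). Field names paraphrase the statute and
# are not statutory terms.
from dataclasses import dataclass, fields

@dataclass
class CoveredToolDisclosure:
    tool_details: str = ""          # (1) developer, funding, foundation model, output
    intended_use: str = ""          # (2) patient population, users, decisionmaking role
    out_of_scope_use: str = ""      # (3) known risks and limitations
    inputs: str = ""                # (4) list of inputs
    output_generation: str = ""     # (5) how outputs are generated
    development_details: str = ""   # (6) training data, biases, fairness process
    validation_process: str = ""    # (7) description of validation
    performance_measures: str = ""  # (8) qualitative measures of performance
    maintenance: str = ""           # (9) ongoing maintenance of implementation
    update_process: str = ""        # (10) updates and continued validation/fairness
    liability_notice: str = ""      # (11) entities and developers liable for harm
    override_notice: str = ""       # (12) direct patient care workers may override

def missing_categories(d: CoveredToolDisclosure) -> list[str]:
    """Return the names of any categories left blank."""
    return [f.name for f in fields(d) if not getattr(d, f.name).strip()]

draft = CoveredToolDisclosure(tool_details="Developed by a hypothetical vendor")
print(len(missing_categories(draft)))  # 11 categories still unfilled
```

A deployer would populate each field from the developer's documentation and surface the completed record to the professional at the time of use, per subdivision (c)(1).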
H-01 Human Oversight of Automated Decisions · H-01.6 · Deployer · Healthcare
Labor Code § 2821(a), (c)
Plain Language
Employers may not use or deploy AI, clinical decision support systems, or other healthcare technology in a way that replaces or limits a direct patient care worker's ability to exercise professional judgment within their scope of practice. This is an affirmative prohibition — the employer cannot design workflows, policies, or system configurations that effectively override or constrain the clinician's independent judgment. The policy declaration in subdivision (a) provides interpretive context: the legislature's intent is that clinicians retain autonomy over patient care decisions even when AI tools are deployed. This effectively requires that any AI tool used in patient care operate in an advisory capacity, with the clinician retaining final decision-making authority.
Statutory Text
(a) It is the public policy of the State of California that a worker providing direct patient care be free to use their professional judgment to make assessments and decisions within their scope of practice as appropriate for their patients. (c) An employer shall not use or deploy technology to replace or limit a worker's use of professional judgment in patient care.
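One system-design reading of § 2821(c), which is our interpretation rather than statutory language, is that a covered tool's output must remain a suggestion the clinician can accept, modify, or reject, with only the clinician's decision taking effect. A minimal sketch under that assumption:

```python
# Minimal sketch: a workflow in which the tool's output is advisory only and
# the clinician's decision is always the one acted upon. This design reading
# of Labor Code § 2821(c) is an assumption, not statutory language.
from dataclasses import dataclass

@dataclass
class ToolOutput:
    recommendation: str

@dataclass
class ClinicianDecision:
    action: str          # the order that actually takes effect
    overrode_tool: bool  # whether the clinician departed from the tool

def finalize(tool: ToolOutput, clinician_action: str) -> ClinicianDecision:
    """Only the clinician's choice is ever executed; the tool never auto-acts."""
    return ClinicianDecision(
        action=clinician_action,
        overrode_tool=(clinician_action != tool.recommendation),
    )

decision = finalize(ToolOutput("order CT scan"), "order MRI instead")
print(decision.action)         # prints "order MRI instead"
print(decision.overrode_tool)  # prints True
```

The design point is that no code path executes the tool's recommendation without a clinician decision in between, which is one way to avoid "replacing or limiting" professional judgment.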
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Deployer · Healthcare
Labor Code § 2821(b), (d), (e)
Plain Language
Employers are prohibited from retaliating or discriminating against direct patient care workers in two scenarios: (1) when the worker overrides or requests to override an AI or clinical decision support tool's output based on their professional judgment or to comply with applicable law, and (2) when the worker complies with the output of employer-approved technology. This is a dual-protection anti-retaliation provision — workers are shielded whether they follow or deviate from the AI's recommendation. Workers who experience retaliation may file a complaint with the Labor Commissioner. The policy declaration in subdivision (b) provides additional interpretive context that good-faith reliance on employer-approved technology should not result in penalties.
Statutory Text
(b) It is the public policy of the State of California that a worker providing direct patient care should not be penalized for relying in good faith on technology that the licensed health care professional's employer has selected or approved for their use in patient care. (d) An employer shall not retaliate or discriminate against a worker providing direct patient care based on both of the following: (1) The worker's override of, or request to override, the output of technology if, in the judgment of the worker acting in their scope of practice, such an override is appropriate for the patient, or as necessary to comply with applicable law, including civil rights law. (2) The worker's compliance with the output of technology if the technology was provided or approved by the worker's employer for patient care. (e) A worker who is subject to retaliation or discrimination in violation of this article has the right under this article to file a complaint with the Labor Commissioner against an employer who retaliates or discriminates against the worker.
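The dual protection in subdivision (d), covering both override and compliance, can be expressed as a simple predicate. The function and parameter names below are our own shorthand, not statutory terms:

```python
# Sketch of the dual protection in Labor Code § 2821(d): both overriding a
# tool's output (in the worker's professional judgment or to comply with law)
# and complying with employer-approved technology are protected conduct.
# Names are illustrative shorthand, not statutory terms.
def is_protected_conduct(action: str, *, in_scope_of_practice: bool = True,
                         employer_approved: bool = True) -> bool:
    if action == "override":
        # § 2821(d)(1): protected when the override reflects the worker's
        # judgment within their scope of practice (or legal compliance).
        return in_scope_of_practice
    if action == "comply":
        # § 2821(d)(2): protected when the technology was provided or
        # approved by the employer for patient care.
        return employer_approved
    return False

print(is_protected_conduct("override"))                         # prints True
print(is_protected_conduct("comply"))                           # prints True
print(is_protected_conduct("comply", employer_approved=False))  # prints False
```

The point of the sketch is that neither branch depends on whether the worker agreed with the tool; the protected status turns on scope of practice and employer approval, not on the direction of the decision.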
Other · Healthcare
Civil Code § 1714.48(b)-(c)
Plain Language
Defendants who developed, modified, selected, or deployed AI or clinical decision support systems in patient care may not argue that a healthcare worker's failure to override the tool's output constitutes a superseding cause that severs the defendant's liability. This prevents AI developers and deployers from shifting blame to the clinician who relied on the AI's recommendation. The provision preserves all other affirmative defenses — including causation, foreseeability, and comparative fault arguments — so defendants can still contest liability on other grounds. This is significant for product counsel because it means the 'learned intermediary' style defense is unavailable specifically for the failure-to-override theory.
Statutory Text
(b) In an action against a defendant who developed, modified, selected, or deployed artificial intelligence or a clinical decision support system that is alleged to have caused harm to the plaintiff, it shall not be a defense, and the defendant may not assert, that the failure of a licensed health care professional or other health care worker to override an output of the artificial intelligence or clinical decision support system is a superseding cause severing the defendant's liability for the alleged harm. (c) This section does not limit or preclude a defendant from presenting either of the following: (1) Any other affirmative defense, including evidence relevant to causation or foreseeability. (2) Other evidence relevant to the comparative fault of any other person or entity.
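In pleading terms, subdivision (b) bars one specific theory while subdivision (c) preserves the rest. A hypothetical triage sketch, with defense labels that are our own shorthand:

```python
# Sketch: screening a covered defendant's asserted defenses against
# Civil Code § 1714.48. Only the failure-to-override-as-superseding-cause
# theory is barred by (b); other defenses (causation, foreseeability,
# comparative fault) remain available per (c). Labels are our shorthand.
BARRED_THEORIES = {"failure_to_override_superseding_cause"}  # § 1714.48(b)

def available_defenses(asserted: list[str]) -> list[str]:
    """Drop only the barred theory; everything else survives under (c)."""
    return [d for d in asserted if d not in BARRED_THEORIES]

asserted = [
    "failure_to_override_superseding_cause",
    "no_causation",
    "comparative_fault_of_third_party",
]
print(available_defenses(asserted))
# prints ['no_causation', 'comparative_fault_of_third_party']
```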