SB-503
CA · State · USA
● Pending
Proposed Effective Date
2026-01-01
California SB 503 — Health care services: artificial intelligence
Summary

Directs the Department of Health Care Access and Information and the Department of Technology to establish an advisory board on AI use in healthcare services. The advisory board must develop best practices for AI use by health facilities, clinics, and physician's offices, create a standardized testing system for evaluating AI models and systems for biased impacts across protected characteristics, and establish a statewide certification program. Developers of AI models or systems used in healthcare settings are required to test for biased impacts based on each health facility's patient population, initially using an existing testing system designated by the advisory board. No enforcement mechanism, penalties, or private right of action are specified.

Enforcement & Penalties
Enforcement Authority
No enforcement mechanism specified in the bill text. The Department of Health Care Access and Information and the Department of Technology are directed to establish an advisory board, but no enforcement authority, penalty provisions, or compliance mechanisms are granted to any agency or private party.
Penalties
No damages, penalties, or remedies are specified in the bill text.
Who Is Covered
"Developer" means a person, partnership, state or local governmental agency, corporation, or deployer that designs, codes, substantially modifies, or otherwise produces an AI model or AI system that generates an output that can influence physical or virtual environments.
"Deployer" means a person, partnership, state or local governmental agency, corporation, or developer that uses a covered AI model or AI system that generates an output that can influence physical or virtual environments.
An entity may be both a developer and a deployer if it both produces such an AI model or system and uses a covered AI model or system.
Compliance Obligations · 3 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Developer · Healthcare
Health & Safety Code § 1339.76(c)(1)-(2)
Plain Language
Developers of AI models or systems used in healthcare settings must test their systems for biased impacts — meaning unintended impacts on individuals based on protected characteristics — in the outputs produced by the AI, using the patient population of the specific health facility, clinic, physician's office, or group practice where the AI is deployed. Testing must be conducted in conjunction with the health facility. Until the advisory board develops its standardized testing system, developers must use an existing testing system designated by the board; once the board's system is available, developers may choose to use it instead.
Statutory Text
(c) (1) Developers of AI models or AI systems, in conjunction with health facilities, clinics, physician's offices, or offices of a group practice, shall test for biased impacts in the outputs produced by the specified AI model or AI system based on the health facility's patient population. (2) Developers shall use an existing testing system designated by the advisory board until the advisory board has developed its standardized testing system described in paragraph (2) of subdivision (b). After the advisory board has developed its testing system, developers may alternatively use the board's testing system.
Other · Government · Healthcare
Health & Safety Code § 1339.76(a)-(b)(1)-(3)
Plain Language
The Department of Health Care Access and Information and the Department of Technology must establish an advisory board on AI in healthcare. The board must develop best practices for AI use in health facilities, create a standardized bias testing system with criteria for developers to test AI models and systems, and establish a statewide certification confirming a developer's AI model or system meets the board's bias testing standards. These are directives to state agencies rather than obligations on private entities.
Statutory Text
(a) The Department of Health Care Access and Information and the Department of Technology shall establish an advisory board related to the use of artificial intelligence (AI) in health care services. (b) The advisory board shall do all of the following: (1) Develop best practices for the use of AI models or AI systems by a health facility, clinic, physician's office, or office of a group practice that uses AI in its provision of health care services. (2) Develop a standardized testing system with criteria for developers to test AI models or AI systems for biased impacts. (3) Establish a statewide certificate that can be used to confirm that a developer's version or release of their AI model or AI system meets standards set by the advisory board pursuant to the testing for biased impacts described in paragraph (2).
Other · Healthcare
Health & Safety Code § 1339.76(c)(3)
Plain Language
Once the advisory board establishes its statewide certification program, developers may voluntarily use the board's standardized testing system to obtain certification that their AI models or systems meet the board's bias testing standards. This is permissive — developers are not required to obtain certification, but it provides an optional pathway to demonstrate compliance with the board's standards.
Statutory Text
(3) After the advisory board has created the certification described in paragraph (3) of subdivision (b), developers may use the advisory board's standardized testing system to certify their AI models or AI systems.