SB-503
CA · State · USA
● Failed
Effective Date
2026-01-01
California SB 503 — Health care services: artificial intelligence.
Summary

SB 503 would have required the Department of Health Care Access and Information and the Department of Technology to establish an advisory board on the use of AI in health care services. The advisory board would develop best practices for health care AI use, create a standardized testing system for bias in AI models, and establish a statewide certification program. Developers of AI models or systems used in health facilities, clinics, and physician offices would be required to test for biased impacts based on the facility's patient population. The bill specifies no enforcement mechanism, penalties, or private right of action. The bill was ordered to the inactive file and did not advance.

Enforcement & Penalties
Enforcement Authority
No enforcement mechanism specified. The bill establishes an advisory board under the Department of Health Care Access and Information and the Department of Technology but does not designate any enforcement authority, penalty structure, or complaint-driven process for violations of the bias testing requirement.
Penalties
No damages, penalties, or remedies are specified in the bill.
Who Is Covered
"Developer" means a person, partnership, state or local governmental agency, corporation, or deployer that designs, codes, substantially modifies, or otherwise produces an AI model or AI system that generates an output that can influence physical or virtual environments.
"deployer" means a person, partnership, state or local governmental agency, corporation, or developer that uses a covered AI model or AI system that generates an output that can influence physical or virtual environments.
A person, partnership, state or local governmental agency, or corporation may be both a developer and a deployer if it both designs, codes, substantially modifies, or otherwise produces a covered AI model or AI system and also uses one.
Compliance Obligations · 2 obligations
Other · Government · Healthcare
Health & Safety Code § 1339.76(a)-(b)
Plain Language
The Department of Health Care Access and Information and the Department of Technology must jointly establish an advisory board focused on AI in healthcare. The board must develop best practices for healthcare AI use, create a standardized bias testing system, and establish a statewide certification program for AI models and systems that pass bias testing. This is an institutional-creation mandate directed at state agencies rather than a compliance obligation on private entities, so no taxonomy requirement applies.
Statutory Text
(a) The Department of Health Care Access and Information and the Department of Technology shall establish an advisory board related to the use of artificial intelligence (AI) in health care services.
(b) The advisory board shall do all of the following:
(1) Develop best practices for the use of AI models or AI systems by a health facility, clinic, physician's office, or office of a group practice that uses AI in its provision of health care services.
(2) Develop a standardized testing system with criteria for developers to test AI models or AI systems for biased impacts.
(3) Establish a statewide certificate that can be used to confirm that a developer's version or release of their AI model or AI system meets standards set by the advisory board pursuant to the testing for biased impacts described in paragraph (2).
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Developer · Healthcare
Health & Safety Code § 1339.76(c)(1)-(3)
Plain Language
Developers of AI models or systems used in healthcare settings must, in conjunction with the health facilities, clinics, physician's offices, or group practices that use them, test for biased impacts in the AI system's outputs. Testing must account for the specific patient population of the health facility. Until the advisory board develops its own standardized testing system, developers must use an existing testing system designated by the board; once the board's system is available, developers may use it instead and may obtain a statewide certification confirming that their AI model or system meets the board's bias standards. The testing obligation is mandatory; the certification is voluntary.
Statutory Text
(c) (1) Developers of AI models or AI systems, in conjunction with health facilities, clinics, physician's offices, or offices of a group practice, shall test for biased impacts in the outputs produced by the specified AI model or AI system based on the health facility's patient population.
(2) Developers shall use an existing testing system designated by the advisory board until the advisory board has developed its standardized testing system described in paragraph (2) of subdivision (b). After the advisory board has developed its testing system, developers may alternatively use the board's testing system.
(3) After the advisory board has created the certification described in paragraph (3) of subdivision (b), developers may use the advisory board's standardized testing system to certify their AI models or AI systems.