SB-420
CA · State · USA
● Pending
Proposed Effective Date
2026-01-01
California SB 420 — Automated Decision Systems (Adding Chapter 24.6 to Division 8 of the Business and Professions Code and Article 11 to Chapter 1 of Part 2 of Division 2 of the Public Contract Code)
Summary

Regulates developers and deployers of high-risk automated decision systems used in consequential decisions affecting education, employment, housing, healthcare, lending, legal rights, essential utilities, and government services. Requires developers and deployers to perform impact assessments before making systems publicly available or deploying them, and to maintain governance programs with safeguards against algorithmic discrimination. Deployers must notify affected individuals about automated decisions and provide an appeal mechanism for human review. Prohibits deployment of systems found likely to result in algorithmic discrimination unless safeguards are implemented. Enforcement is by the Attorney General or Civil Rights Department with a 45-day cure period; civil penalties range from $2,500 to $25,000 per violation depending on entity size and violation type. Entities with 50 or fewer employees are exempt.

Enforcement & Penalties
Enforcement Authority
The Attorney General or the Civil Rights Department may bring a civil action against a deployer or developer for a violation. Before commencing an action, the Attorney General or Civil Rights Department must provide 45 days' written notice of the alleged violation. The developer or deployer may cure the noticed violation within 45 days and provide an express written statement, under penalty of perjury, that the violation has been cured; if cured, the action shall not be maintained. No private right of action is created.
Penalties
For failure to conduct an impact assessment: a $2,500 civil penalty for defendants with fewer than 100 employees, $5,000 for defendants with 100 to 499 employees, and $10,000 for defendants with 500 or more employees. If the violation is intentional, an additional $500 per day of noncompliance applies. For violations concerning algorithmic discrimination: $25,000 per violation. Injunctive relief and reasonable attorney's fees and costs are also available.
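For illustration only, the tiered penalty amounts above reduce to simple conditional arithmetic. The Python sketch below is a hypothetical helper, not statutory text; the function name, parameters, and tier boundaries are assumptions drawn from the summary above.

def impact_assessment_penalty(employee_count: int, intentional: bool = False,
                              days_noncompliant: int = 0) -> int:
    # Base penalty for failure to conduct an impact assessment, tiered by employer size.
    if employee_count < 100:
        base = 2_500
    elif employee_count < 500:
        base = 5_000
    else:
        base = 10_000
    # Intentional violations add $500 per day of noncompliance.
    return base + (500 * days_noncompliant if intentional else 0)

ALGORITHMIC_DISCRIMINATION_PENALTY = 25_000  # per violation

# Example: a 250-employee defendant, intentional violation, 10 days of noncompliance.
print(impact_assessment_penalty(250, intentional=True, days_noncompliant=10))  # 10000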
Who Is Covered
"Developer" means a natural person or entity that designs, codes, produces, or substantially modifies a high-risk automated decision system for use in the state.
"Deployer" means a natural person or entity that uses a high-risk automated decision system in the state.
What Is Covered
"Automated decision system" means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons. "Automated decision system" does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data.
"High-risk automated decision system" means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, any of the following: (A) Education enrollment or opportunity. (B) Employment or employment opportunity. (C) Essential utilities. (D) Temporary, short-term, or long-term housing. (E) Health care services. (F) Lending services. (G) A legal right or service. (H) An essential government service. "High-risk automated decision system" does not include an automated decision system that only performs narrow procedural tasks, enhances human activities, detects patterns without influencing decisions, or assists in preparatory tasks for assessment.
Compliance Obligations · 13 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.3 · Developer · Automated Decisionmaking
Bus. & Prof. Code § 22756.1(a)(1)-(2), (c)(2)
Plain Language
Developers must complete a formal impact assessment before making any high-risk automated decision system publicly available on or after January 1, 2026. For systems already on the market before that date, the impact assessment obligation triggers only upon a substantial modification. The assessment must cover the system's purpose and intended uses, its outputs, data inputs, foreseeable discriminatory impacts on protected classifications, safeguards against algorithmic discrimination, and monitoring guidance for deployers. This assessment must be retained and is subject to regulatory request under § 22756.6.
Statutory Text
(a) (1) For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use. (2) For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment if the developer makes a substantial modification to the high-risk automated decision system. (c) (2) An impact assessment prepared pursuant to this section shall include all of the following: (A) A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts. (B) A description of the high-risk automated decision system's intended outputs. (C) A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system. (D) A summary of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system. (E) A developer's impact assessment shall also include both of the following: (i) A description of safeguards implemented or other measures taken by the developer to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system. (ii) A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer.
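The timing trigger in subdivisions (a)(1) and (a)(2) above reduces to a simple condition: an assessment before public availability for systems released on or after January 1, 2026, and an assessment upon substantial modification for systems already on the market. The Python sketch below is one hypothetical reading; the function and parameter names are assumptions, not statutory language.

from datetime import date

EFFECTIVE_DATE = date(2026, 1, 1)

def developer_assessment_required(first_public_availability: date,
                                  substantially_modified: bool) -> bool:
    # Systems made publicly available on or after the effective date require an
    # impact assessment before release.
    if first_public_availability >= EFFECTIVE_DATE:
        return True
    # Pre-2026 systems trigger the obligation only upon a substantial modification.
    return substantially_modified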
H-02 Non-Discrimination & Bias Assessment · H-02.3 · Deployer · Government · Automated Decisionmaking
Bus. & Prof. Code § 22756.1(b), (c)(2)(F)-(H)
Plain Language
Deployers of high-risk automated decision systems first deployed after January 1, 2026, must perform an impact assessment within two years of deployment. The deployer's assessment must address how its use aligns with or deviates from the developer's intended uses, describe safeguards against discrimination, and explain monitoring and evaluation plans. A state agency may opt out of performing its own impact assessment if it uses the system only as the developer intended and all of the following are met: (1) the agency makes no substantial modification to the system, (2) the developer complies with the procurement requirement (Pub. Contract Code § 10285.8) and the confidential impact assessment requirement of subdivision (d), (3) the agency has no reasonable basis to believe the system is likely to result in algorithmic discrimination, and (4) the agency maintains its own governance program under § 22756.3.
Statutory Text
(b) (1) Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system. (2) A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met: (A) The state agency does not make a substantial modification to the high-risk automated decision system. (B) The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d). (C) The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as intended by the developer is likely to result in algorithmic discrimination. (D) The state agency is in compliance with Section 22756.3. (c) (2) An impact assessment prepared pursuant to this section shall include all of the following: (F) A statement of the extent to which the deployer's use of the high-risk automated decision system is consistent with, or varies from, the developer's statement of the high-risk automated decision system's purpose and intended benefits, intended uses, and intended deployment contexts. (G) A description of safeguards implemented or other measures taken to mitigate and guard against any known risks to the deployer of discrimination arising from the high-risk automated decision system. (H) A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.
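Read together, the opt-out conditions in paragraph (b)(2) amount to a conjunction of the threshold intended-use requirement and the four listed requirements. The sketch below is one hypothetical reading; every name is an illustrative assumption.

def state_agency_may_opt_out(uses_only_developer_intended_use: bool,
                             made_substantial_modification: bool,
                             developer_compliant_with_10285_8_and_subd_d: bool,
                             reasonable_basis_to_expect_discrimination: bool,
                             agency_complies_with_22756_3: bool) -> bool:
    # Every condition must hold for a state agency to skip its own impact assessment.
    return (uses_only_developer_intended_use
            and not made_substantial_modification
            and developer_compliant_with_10285_8_and_subd_d
            and not reasonable_basis_to_expect_discrimination
            and agency_complies_with_22756_3)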
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Bus. & Prof. Code § 22756.1(c)(1)
Plain Language
Developers must make the content of their impact assessment available to current and prospective deployers. This is a downstream disclosure obligation — distinct from the confidential submission to regulators under § 22756.6 — ensuring that entities considering deploying a high-risk system can review the developer's assessment of purpose, intended uses, data inputs, foreseeable discriminatory impacts, safeguards, and monitoring guidance before making a deployment decision.
Statutory Text
(c) (1) A developer shall make available to deployers and potential deployers the statements included in the developer's impact assessment pursuant to paragraph (2).
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.3 · Deployer · Automated Decisionmaking
Bus. & Prof. Code § 22756.2(a)
Plain Language
When a deployer uses a high-risk automated decision system to make a decision about an individual, the deployer must notify that person and disclose: the system's purpose and the specific decision made, how the system was used in the decision, the types of data the system used, the deployer's contact information, and a link to the deployer's public website statement about its automated decision systems. This is a post-decision notification and explanation obligation — the statute does not explicitly require pre-decision notice, but it does require disclosure of the specific decision made.
Statutory Text
(a) If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following: (1) The purpose of the high-risk automated decision system and the specific decision it was used to make. (2) How the high-risk automated decision system was used to make the decision. (3) The type of data used by the high-risk automated decision system. (4) Contact information for the deployer. (5) A link to the statement required by subdivision (b).
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Bus. & Prof. Code § 22756.2(b)
Plain Language
Deployers must publish and maintain on their website a public summary disclosing which types of high-risk automated decision systems they currently deploy, how they manage known or foreseeable risks of algorithmic discrimination, and the nature and source of information the systems collect and use. This is a standing public disclosure obligation — not a one-time filing — and must be kept current as deployed systems change.
Statutory Text
(b) A deployer shall make available on its internet website a statement summarizing all of the following: (1) The types of high-risk automated decision systems it currently deploys. (2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of those high-risk automated decision systems. (3) The nature and source of the information collected and used by the high-risk automated decision systems deployed by the deployer.
H-01 Human Oversight of Automated Decisions · H-01.4 · Deployer · Automated Decisionmaking
Bus. & Prof. Code § 22756.2(c)
Plain Language
Deployers must provide individuals who are subject to a decision made by a high-risk automated decision system with an opportunity to appeal the decision for human review. This obligation is conditioned on technical feasibility, which provides some flexibility but does not eliminate the requirement where human review is practicable. The statute does not specify a timeframe for the appeal or the qualifications of the human reviewer.
Statutory Text
(c) A deployer shall provide, as technically feasible, a natural person that is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.
G-01 AI Governance Program & Documentation · G-01.1 · Developer · Deployer · Automated Decisionmaking
Bus. & Prof. Code § 22756.3(a)-(b)
Plain Language
Both developers and deployers must establish, document, implement, and maintain a formal governance program with reasonable administrative and technical safeguards against algorithmic discrimination risks. The program must be proportionate to the system's intended use, the entity's size and resources, the nature and scope of activities, and the technical feasibility and cost of available risk management tools. This is a continuing obligation — the program must be maintained, not merely created — and applies to each high-risk automated decision system in use or intended for use.
Statutory Text
(a) A developer or a deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system. (b) The governance program required by this subdivision shall be appropriately designed with respect to all of the following: (1) The use, or intended use, of the high-risk automated decision system. (2) The size, complexity, and resources of the deployer or developer. (3) The nature, context, and scope of the activities of the deployer or developer in connection with the high-risk automated decision system. (4) The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.
H-02 Non-Discrimination & Bias Assessment · H-02.3 · Developer · Deployer · Automated Decisionmaking
Bus. & Prof. Code § 22756.5(a)-(b)
Plain Language
Developers and deployers are prohibited from deploying or making available a high-risk automated decision system if the impact assessment finds the system is likely to cause algorithmic discrimination. However, an exception exists: deployment is permitted if the entity implements safeguards to mitigate the discrimination risks and then performs an updated impact assessment verifying that the algorithmic discrimination has been mitigated and is not reasonably likely to occur. This effectively creates a deploy-with-safeguards pathway conditioned on a second, confirmatory impact assessment.
Statutory Text
(a) Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination. (b) (1) A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination. (2) A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.
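The prohibition and its exception above work as a two-step gate: deployment is barred on a finding of likely algorithmic discrimination unless safeguards are implemented and an updated assessment confirms mitigation. The sketch below is one hypothetical reading; the names are illustrative assumptions, not statutory terms.

def deployment_permitted(assessment_finds_likely_discrimination: bool,
                         safeguards_implemented: bool,
                         updated_assessment_confirms_mitigation: bool) -> bool:
    # No finding of likely algorithmic discrimination: deployment is not barred by § 22756.5.
    if not assessment_finds_likely_discrimination:
        return True
    # Exception: safeguards plus a confirmatory updated impact assessment.
    return safeguards_implemented and updated_assessment_confirms_mitigation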
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking
Bus. & Prof. Code § 22756.6(a)
Plain Language
Developers must provide a copy of their impact assessment to the Attorney General or Civil Rights Department within 30 days of a request. The submitted impact assessment must be kept confidential by the receiving agency. This is a responsive submission obligation — developers are not required to proactively submit but must be able to produce the assessment on a 30-day turnaround. Note this obligation applies only to developers; the statute does not expressly require deployers to submit their assessments to the AG or CRD upon request.
Statutory Text
(a) (1) A developer shall provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment performed pursuant to this chapter. (2) Notwithstanding any other law, an impact assessment provided to the Attorney General or Civil Rights Department pursuant to this subdivision shall be kept confidential.
PS-01 Government AI Accountability · PS-01.2 · Government · Automated Decisionmaking · Government System
Bus. & Prof. Code § 22756.1(d)
Plain Language
State agencies deploying high-risk automated decision systems must require the developer to provide a copy of the developer's impact assessment. The state agency must keep the impact assessment confidential. This creates a procurement-adjacent obligation: state agencies cannot deploy these systems without first obtaining and retaining the developer's impact assessment, which functions as a pre-deployment documentation requirement for government use of AI.
Statutory Text
(d) (1) A state agency shall require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment conducted pursuant to this section. (2) Notwithstanding any other law, an impact assessment provided to a state agency pursuant to this subdivision shall be kept confidential.
PS-01 Government AI Accountability · PS-01.4 · Government · Developer · Automated Decisionmaking · Government System
Pub. Contract Code § 10285.8(a)-(b)
Plain Language
State agencies are prohibited from awarding a contract for a high-risk automated decision system to any vendor that has violated the Unruh Civil Rights Act, the California Fair Employment and Housing Act, or this bill's automated decision system requirements. This functions as a procurement debarment provision — vendors with civil rights or AI compliance violations are ineligible for state AI contracts. The practical compliance burden falls on both the state agency (which must verify vendor compliance) and the vendor (which must maintain a clean compliance record to remain eligible).
Statutory Text
(a) A state agency shall not award a contract for a high-risk automated decision system to a person who has violated any of the following: (1) The Unruh Civil Rights Act (Section 51 of the Civil Code). (2) The California Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code). (3) Chapter 24.6 (commencing with Section 22756) of Division 8 of the Business and Professions Code. (b) As used in this section, "high-risk automated decision system" has the same meaning as defined in Section 22756 of the Business and Professions Code.
Other · Automated Decisionmaking
Bus. & Prof. Code § 22756.4
Plain Language
Developers and deployers are not required to disclose information under this chapter if doing so would waive a legal privilege or reveal a trade secret. This is a safe harbor that limits all disclosure obligations in the chapter — including impact assessment content shared with deployers, individual notifications, and public website statements — but does not eliminate the underlying obligation to perform the assessment or maintain the governance program. The trade secret definition incorporates Civil Code § 3426.1 (California's Uniform Trade Secrets Act).
Statutory Text
A developer or deployer is not required to disclose information under this chapter if the disclosure of that information would result in the waiver of a legal privilege or the disclosure of a trade secret, as defined in Section 3426.1 of the Civil Code.
Other · Automated Decisionmaking
Bus. & Prof. Code § 22756.7
Plain Language
Two exemptions apply to the entire chapter. First, entities with 50 or fewer employees are fully exempt from all obligations. Second, high-risk automated decision systems that have been approved, certified, or cleared by a federal agency and that comply with another law that is substantially the same or more stringent than this chapter are exempt. The federal-regulation exemption effectively creates a federal-floor preemption carve-out — if a system is already subject to equivalent or stricter federal oversight, state obligations do not apply.
Statutory Text
This chapter does not apply to either of the following: (a) An entity with 50 or fewer employees. (b) A high-risk automated decision system that has been approved, certified, or cleared by a federal agency that complies with another law that is substantially the same or more stringent than this chapter.
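The two chapter-level exemptions combine into a simple applicability test: the chapter governs only entities above the small-employer threshold and only systems that do not fall within the federal-approval carve-out. The sketch below is one hypothetical reading of § 22756.7; the names and the reading of subdivision (b) are assumptions.

def chapter_applies(employee_count: int,
                    federally_approved_certified_or_cleared: bool,
                    complies_with_equivalent_or_stricter_law: bool) -> bool:
    # Small-entity exemption: 50 or fewer employees.
    if employee_count <= 50:
        return False
    # Federal-approval exemption: federal clearance plus compliance with an equally
    # or more stringent law.
    if federally_approved_certified_or_cleared and complies_with_equivalent_or_stricter_law:
        return False
    return True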