SB-420
CA · State · USA
● Pending
Proposed Effective Date
2026-01-01
California SB 420 — Automated Decision Systems (2025–2026 Regular Session)
Summary

CA SB 420 regulates developers and deployers of high-risk automated decision systems — systems that materially affect access to employment, education, housing, healthcare, lending, legal rights, essential utilities, or government services. Core obligations include conducting impact assessments before deployment or public availability, establishing governance programs to mitigate algorithmic discrimination risks, notifying affected individuals and disclosing system details, providing an appeal mechanism for human review, and restricting deployment when an impact assessment identifies likely algorithmic discrimination. Enforcement is through civil actions by the Attorney General or Civil Rights Department, with a 45-day cure period. Entities with 50 or fewer employees are exempt, as are systems approved by a federal agency under substantially equivalent or more stringent law. The bill also bars state agencies from awarding contracts to developers who have violated civil rights laws or this chapter.

Enforcement & Penalties
Enforcement Authority
The Attorney General or the Civil Rights Department may bring a civil action against a deployer or developer for a violation of this chapter. Before commencing an action, the Attorney General or Civil Rights Department must provide 45 days' written notice of the alleged violation. The developer or deployer may cure the noticed violation within 45 days and provide an express written statement under penalty of perjury that the violation has been cured; if cured, the action shall not be maintained for the noticed violation. No private right of action is created.
Penalties
Tiered civil penalties for failure to conduct an impact assessment: $2,500 for defendants with fewer than 100 employees, $5,000 for fewer than 500 employees, and $10,000 for at least 500 employees. If a violation is intentional, the civil penalty increases by $500 per day of noncompliance. For violations concerning algorithmic discrimination, a civil penalty of $25,000 per violation. Injunctive relief and reasonable attorney's fees and costs are also available.
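The tiered penalty math above can be sketched as a small function. This is an illustrative model only, not legal advice; the function name and parameters are assumptions, and the dollar figures come from § 22756.6(b) as quoted in this section.

```python
def civil_penalty(employees: int, intentional: bool = False,
                  days_noncompliant: int = 0,
                  discrimination_violations: int = 0) -> int:
    """Illustrative sketch of SB 420's civil penalty tiers
    (Bus. & Prof. Code § 22756.6(b)); not legal advice."""
    # Tiered base penalty for failure to conduct an impact assessment.
    if employees < 100:
        penalty = 2_500
    elif employees < 500:
        penalty = 5_000
    else:
        penalty = 10_000
    # Intentional violations add $500 per day of noncompliance.
    if intentional:
        penalty += 500 * days_noncompliant
    # Algorithmic-discrimination violations: $25,000 per violation.
    penalty += 25_000 * discrimination_violations
    return penalty
```

For example, an intentional failure by a 450-employee defendant, 10 days noncompliant, would model as $5,000 + $5,000 = $10,000 under this sketch.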
Who Is Covered
"Developer" means a natural person or entity that designs, codes, produces, or substantially modifies a high-risk automated decision system for use in the state.
"Deployer" means a natural person or entity that uses a high-risk automated decision system in the state.
What Is Covered
"Automated decision system" means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons. "Automated decision system" does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data.
"High-risk automated decision system" means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, any of the following: (A) Education enrollment or opportunity. (B) Employment or employment opportunity. (C) Essential utilities. (D) Temporary, short-term, or long-term housing. (E) Health care services. (F) Lending services. (G) A legal right or service. (H) An essential government service. "High-risk automated decision system" does not include an automated decision system that only performs narrow procedural tasks, enhances human activities, detects patterns without influencing decisions, or assists in preparatory tasks for assessment.
Compliance Obligations · 13 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.3 · Developer · Automated Decisionmaking
Bus. & Prof. Code § 22756.1(a)(1)-(2), (c)(2)(A)-(E)
Plain Language
Developers must complete an impact assessment before making a high-risk automated decision system publicly available (for systems available on or after January 1, 2026) or upon making a substantial modification (for systems available before that date). The impact assessment must cover the system's purpose, intended uses, intended outputs, data inputs, foreseeable disproportionate impacts on protected classifications, safeguards against algorithmic discrimination, and monitoring guidance for deployers. Developers must make the impact assessment statements available to deployers and potential deployers.
Statutory Text
(a) (1) For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use. (2) For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment if the developer makes a substantial modification to the high-risk automated decision system. (c) (1) A developer shall make available to deployers and potential deployers the statements included in the developer's impact assessment pursuant to paragraph (2). (2) An impact assessment prepared pursuant to this section shall include all of the following: (A) A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts. (B) A description of the high-risk automated decision system's intended outputs. (C) A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system. (D) A summary of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system. (E) A developer's impact assessment shall also include both of the following: (i) A description of safeguards implemented or other measures taken by the developer to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system. (ii) A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer.
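A developer building a compliance checklist might model the six required assessment sections of § 22756.1(c)(2)(A)-(E) as structured fields. The field names below are illustrative shorthand, not statutory language.

```python
from dataclasses import dataclass, fields

@dataclass
class DeveloperImpactAssessment:
    """Sketch of the required contents of a developer impact assessment
    under § 22756.1(c)(2)(A)-(E); field names are illustrative."""
    purpose_and_intended_uses: str        # (A) purpose, benefits, uses, contexts
    intended_outputs: str                 # (B) intended outputs
    data_inputs_and_processing: str       # (C) input data types and processing
    disproportionate_impact_summary: str  # (D) foreseeable protected-class impacts
    discrimination_safeguards: str        # (E)(i) mitigation safeguards
    deployer_monitoring_guidance: str     # (E)(ii) how deployers can monitor

    def missing_sections(self) -> list[str]:
        """Names of any required sections left empty."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]
```

A checklist like this could flag incomplete assessments before the system is made available to deployers.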
H-02 Non-Discrimination & Bias Assessment · H-02.3 · Deployer · Government · Automated Decisionmaking
Bus. & Prof. Code § 22756.1(b)(1)-(2), (c)(2)(F)-(H)
Plain Language
Deployers must perform an impact assessment within two years of deploying a high-risk automated decision system first deployed after January 1, 2026. The deployer's impact assessment must address how the deployer's use aligns with or deviates from the developer's intended uses, what safeguards the deployer has implemented against discrimination risks, and how the system is and will be monitored. State agencies that are deployers may opt out of performing their own impact assessment if they use the system only for its intended use, do not substantially modify it, the developer complies with applicable procurement and impact assessment requirements, the agency has no reasonable basis to believe algorithmic discrimination is likely, and the agency maintains a governance program under § 22756.3.
Statutory Text
(b) (1) Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system. (2) A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met: (A) The state agency does not make a substantial modification to the high-risk automated decision system. (B) The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d). (C) The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as intended by the developer is likely to result in algorithmic discrimination. (D) The state agency is in compliance with Section 22756.3. (c) (2) An impact assessment prepared pursuant to this section shall include all of the following: (F) A statement of the extent to which the deployer's use of the high-risk automated decision system is consistent with, or varies from, the developer's statement of the high-risk automated decision system's purpose and intended benefits, intended uses, and intended deployment contexts. (G) A description of safeguards implemented or other measures taken to mitigate and guard against any known risks to the deployer of discrimination arising from the high-risk automated decision system. (H) A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.
PS-01 Government AI Accountability · PS-01.4 · Government · Automated Decisionmaking · Government System
Bus. & Prof. Code § 22756.1(d)(1)-(2)
Plain Language
State agencies deploying a high-risk automated decision system must require the developer to provide a copy of the impact assessment, and the agency must keep that assessment confidential. This creates a procurement-adjacent obligation: state agencies cannot simply deploy a high-risk system without first obtaining and retaining the developer's impact assessment. The confidentiality requirement applies "notwithstanding any other law," which on its face takes precedence over other California disclosure statutes.
Statutory Text
(d) (1) A state agency shall require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment conducted pursuant to this section. (2) Notwithstanding any other law, an impact assessment provided to a state agency pursuant to this subdivision shall be kept confidential.
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.3 · Deployer · Automated Decisionmaking
Bus. & Prof. Code § 22756.2(a)(1)-(5)
Plain Language
When a deployer uses a high-risk automated decision system to make a decision about a person, the deployer must notify that person and provide specific disclosures: the system's purpose and the specific decision made, how the system was used, the types of data used, the deployer's contact information, and a link to the deployer's public summary statement. The statute frames this as notice that a decision was made; it does not expressly require pre-decision notice, but it does require disclosure of how and why the system was used in the specific decision affecting that individual.
Statutory Text
(a) If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following: (1) The purpose of the high-risk automated decision system and the specific decision it was used to make. (2) How the high-risk automated decision system was used to make the decision. (3) The type of data used by the high-risk automated decision system. (4) Contact information for the deployer. (5) A link to the statement required by subdivision (b).
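A deployer could model the five required disclosures of § 22756.2(a) as a record and check it for completeness before sending notice. The field names and helper below are illustrative assumptions, not statutory terms.

```python
from dataclasses import dataclass, asdict

@dataclass
class DecisionNotice:
    """Sketch of the five disclosures § 22756.2(a) requires when notifying
    a person of an ADS-made decision; names are illustrative."""
    system_purpose_and_decision: str  # (1) purpose and the specific decision
    how_system_was_used: str          # (2) how the system made the decision
    data_types_used: str              # (3) types of data used
    deployer_contact: str             # (4) deployer contact information
    public_statement_link: str        # (5) link to the § 22756.2(b) summary

def notice_is_complete(notice: DecisionNotice) -> bool:
    # Every required disclosure must be present and non-empty.
    return all(str(v).strip() for v in asdict(notice).values())
```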
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Bus. & Prof. Code § 22756.2(b)(1)-(3)
Plain Language
Deployers must publish a summary statement on their website describing the types of high-risk automated decision systems they deploy, how they manage algorithmic discrimination risks, and the nature and source of data used by those systems. This is a standing public transparency obligation — it must be maintained on the deployer's website and kept current as systems change.
Statutory Text
(b) A deployer shall make available on its internet website a statement summarizing all of the following: (1) The types of high-risk automated decision systems it currently deploys. (2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of those high-risk automated decision systems. (3) The nature and source of the information collected and used by the high-risk automated decision systems deployed by the deployer.
H-01 Human Oversight of Automated Decisions · H-01.4 · Deployer · Automated Decisionmaking
Bus. & Prof. Code § 22756.2(c)
Plain Language
Deployers must provide affected individuals an opportunity to appeal decisions made by a high-risk automated decision system for human review, to the extent technically feasible. The 'technically feasible' qualifier gives deployers some flexibility, but the baseline obligation is to offer a human review appeal process. The statute does not prescribe the format, timeline, or substantive standard for the appeal.
Statutory Text
(c) A deployer shall provide, as technically feasible, a natural person that is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.
G-01 AI Governance Program & Documentation · G-01.1 · Developer · Deployer · Automated Decisionmaking
Bus. & Prof. Code § 22756.3(a)-(b)
Plain Language
Both developers and deployers must establish, document, implement, and maintain a formal governance program with reasonable administrative and technical safeguards to address foreseeable risks of algorithmic discrimination. The program must be proportionate to the system's intended use, the entity's size and complexity, the nature of its activities, and the technical feasibility and cost of available risk management tools. This is a continuing obligation — the program must be maintained, not merely established once.
Statutory Text
(a) A developer or a deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system. (b) The governance program required by this subdivision shall be appropriately designed with respect to all of the following: (1) The use, or intended use, of the high-risk automated decision system. (2) The size, complexity, and resources of the deployer or developer. (3) The nature, context, and scope of the activities of the deployer or developer in connection with the high-risk automated decision system. (4) The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.
H-02 Non-Discrimination & Bias Assessment · Developer · Deployer · Automated Decisionmaking
Bus. & Prof. Code § 22756.5(a)-(b)
Plain Language
Developers and deployers are prohibited from deploying or making available a high-risk automated decision system when their impact assessment determines the system is likely to produce algorithmic discrimination. An exception exists: deployment is permitted if the entity implements safeguards to mitigate the known discrimination risks and then performs an updated impact assessment confirming that algorithmic discrimination has been mitigated and is not reasonably likely to occur. This creates a deployment gate tied to impact assessment outcomes — systems flagged for likely discrimination cannot ship without remediation and re-assessment.
Statutory Text
(a) Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination. (b) (1) A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination. (2) A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.
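The deployment gate described above reduces to a short decision rule. The sketch below uses illustrative boolean inputs standing in for impact-assessment findings; it is a simplification of § 22756.5, not an implementation of it.

```python
def may_deploy(likely_discrimination_found: bool,
               safeguards_implemented: bool = False,
               updated_assessment_confirms_mitigated: bool = False) -> bool:
    """Sketch of the § 22756.5 deployment gate; inputs are illustrative."""
    if not likely_discrimination_found:
        # No discrimination finding: this gate does not block deployment.
        return True
    # Flagged systems may ship only after safeguards are implemented AND
    # an updated assessment verifies discrimination is not reasonably likely.
    return safeguards_implemented and updated_assessment_confirms_mitigated
```

The design point the statute makes is that safeguards alone are not enough: the re-assessment step is a separate, mandatory verification.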
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking
Bus. & Prof. Code § 22756.6(a)(1)-(2)
Plain Language
Developers must produce a copy of their impact assessment to the Attorney General or the Civil Rights Department within 30 days of a request. The impact assessment is treated as confidential notwithstanding other California disclosure laws. Note the short 30-day response window. This production obligation applies only to developers; the statute does not impose a parallel obligation on deployers.
Statutory Text
(a) (1) A developer shall provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment performed pursuant to this chapter. (2) Notwithstanding any other law, an impact assessment provided to the Attorney General or Civil Rights Department pursuant to this subdivision shall be kept confidential.
Other · Automated Decisionmaking
Bus. & Prof. Code § 22756.4
Plain Language
Developers and deployers are not required to disclose information under this chapter if doing so would waive a legal privilege or reveal a trade secret. This is a safe harbor limiting the scope of all disclosure obligations in the chapter — it does not create a new affirmative compliance obligation.
Statutory Text
A developer or deployer is not required to disclose information under this chapter if the disclosure of that information would result in the waiver of a legal privilege or the disclosure of a trade secret, as defined in Section 3426.1 of the Civil Code.
Other · Automated Decisionmaking
Bus. & Prof. Code § 22756.6(b)-(c)
Plain Language
This provision establishes enforcement authority and remedies for violations of the chapter. The Attorney General or Civil Rights Department may bring civil actions with tiered penalties and injunctive relief. A 45-day cure period applies before an action can be maintained. This is an enforcement hook — it activates a penalty framework but does not itself create an independent compliance obligation.
Statutory Text
(b) The Attorney General or the Civil Rights Department may bring a civil action against a deployer or developer for a violation of this chapter and obtain any of the following relief: (1) (A) If a developer or deployer fails to conduct an impact assessment as required under this chapter, a civil penalty of two thousand five hundred dollars ($2,500) for a defendant with fewer than 100 employees, five thousand dollars ($5,000) if the defendant has fewer than 500 employees, and ten thousand dollars ($10,000) if the defendant has at least 500 employees. (B) If a violation is intentional, the civil penalty pursuant to this paragraph shall increase by five hundred dollars ($500) for each day that the defendant is noncompliant. (2) Injunctive relief. (3) Reasonable attorney's fees and costs. (4) If the violation concerns algorithmic discrimination, a civil penalty of twenty-five thousand dollars ($25,000) per violation. (c) (1) Before commencing an action pursuant to this section, the Attorney General or the Civil Rights Department shall provide 45 days' written notice to a deployer or developer of any alleged violation of this chapter. (2) (A) The developer or deployer may cure, within 45 days of receiving the written notice described in paragraph (1), the noticed violation and provide an express written statement, made under penalty of perjury, that the violation has been cured. (B) If the developer or deployer cures the noticed violation and provides the express written statement pursuant to subparagraph (A), an action shall not be maintained for the noticed violation.
Other · Automated Decisionmaking
Bus. & Prof. Code § 22756.7(a)-(b)
Plain Language
The entire chapter is inapplicable to entities with 50 or fewer employees and to high-risk automated decision systems that have been approved, certified, or cleared by a federal agency under a law substantially the same or more stringent than this chapter. These are scope exclusions that narrow which entities and systems are covered — they create no new obligation.
Statutory Text
This chapter does not apply to either of the following: (a) An entity with 50 or fewer employees. (b) A high-risk automated decision system that has been approved, certified, or cleared by a federal agency that complies with another law that is substantially the same or more stringent than this chapter.
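The two scope exclusions above can be sketched as an applicability check. The function and parameter names are illustrative; whether a federal approval is "substantially the same or more stringent" is a legal judgment that a boolean flag can only stand in for.

```python
def chapter_applies(employee_count: int,
                    federally_cleared_equivalent: bool = False) -> bool:
    """Sketch of the § 22756.7 scope exclusions; a rough triage only."""
    if employee_count <= 50:          # (a) small-entity exemption
        return False
    if federally_cleared_equivalent:  # (b) federal approval under a law that
        return False                  #     is substantially the same or stricter
    return True
```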
PS-01 Government AI Accountability · PS-01.4 · Government · Automated Decisionmaking · Government System
Pub. Contract Code § 10285.8(a)-(b)
Plain Language
State agencies are prohibited from awarding contracts for high-risk automated decision systems to any person who has violated the Unruh Civil Rights Act, the California Fair Employment and Housing Act, or this chapter (Chapter 24.6 of the Business and Professions Code). This creates a procurement disqualification tied to civil rights and AI compliance history — vendors with prior violations of these laws are ineligible. The statute does not specify a lookback period or rehabilitation process.
Statutory Text
(a) A state agency shall not award a contract for a high-risk automated decision system to a person who has violated any of the following: (1) The Unruh Civil Rights Act (Section 51 of the Civil Code). (2) The California Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code). (3) Chapter 24.6 (commencing with Section 22756) of Division 8 of the Business and Professions Code. (b) As used in this section, "high-risk automated decision system" has the same meaning as defined in Section 22756 of the Business and Professions Code.