SB-2203
IL · State · USA
● Pending
Proposed Effective Date
2027-01-01
Illinois SB 2203 — Preventing Algorithmic Discrimination Act
Summary

Creates the Preventing Algorithmic Discrimination Act, imposing obligations on deployers of automated decision tools used to make or control consequential decisions across employment, education, housing, healthcare, financial services, criminal justice, and other high-stakes domains. Deployers must conduct annual impact assessments analyzing discriminatory risk, submit those assessments to the Attorney General within 60 days of completion, establish and maintain a governance program with a designated responsible employee, publish a public policy summarizing their AI tools and discrimination risk management, and provide pre-decision notice to affected individuals with an opt-out right from solely automated decisions. Prohibits use of automated decision tools that result in algorithmic discrimination. Enforced by the Attorney General under the Consumer Fraud and Deceptive Business Practices Act, with a private right of action for algorithmic discrimination claims (requiring proof of actual harm) available beginning January 1, 2028. Small deployer exemption applies to entities with fewer than 25 employees unless their tool impacted more than 999 people in the prior year.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement under the Consumer Fraud and Deceptive Business Practices Act (815 ILCS 505). All remedies, penalties, and authority granted to the Attorney General by the Consumer Fraud and Deceptive Business Practices Act are available for enforcement. Administrative enforcement actions may be brought by the Attorney General for knowing violations of the impact assessment submission requirement. Private right of action available only for violations of Section 30 (algorithmic discrimination), effective January 1, 2028. Plaintiff must demonstrate actual harm caused by the algorithmic discrimination.
Penalties
For Section 30 violations (algorithmic discrimination): compensatory damages, declaratory relief, and reasonable attorney's fees and costs. Plaintiff must prove actual harm. For Section 35 violations (failure to submit impact assessment to AG): administrative fine of up to $10,000 per violation, with each day the tool is used without a submitted assessment constituting a distinct violation. All remedies and penalties available under the Consumer Fraud and Deceptive Business Practices Act also apply.
Who Is Covered
"Deployer" means a person, partnership, State or local government agency, or corporation that uses an automated decision tool to make a consequential decision.
What Is Covered
"Automated decision tool" means a system or service that uses artificial intelligence and has been specifically developed and marketed to, or specifically modified to, make, or be a controlling factor in making, consequential decisions.
Compliance Obligations · 7 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.4 · Deployer · Automated Decisionmaking
Section 10(a)-(c); Section 35(a)-(c)
Plain Language
By January 1, 2027, and annually thereafter, deployers must complete a formal impact assessment for each automated decision tool they use. The assessment must cover the tool's purpose, outputs, data types collected, an analysis of potential adverse impacts across protected characteristics, safeguards against algorithmic discrimination, human oversight mechanisms, and validity evaluation. A new impact assessment must also be performed as soon as feasible following any significant update. Within 60 days of completing each assessment, the deployer must submit it to the Attorney General. Knowing failure to submit triggers administrative fines of up to $10,000 per violation, with each day the tool is used without a submitted assessment counting as a separate violation. Deployers with fewer than 25 employees are exempt unless their tool impacted more than 999 people in the prior calendar year.
Statutory Text
(a) On or before January 1, 2027, and annually thereafter, a deployer of an automated decision tool shall perform an impact assessment for any automated decision tool the deployer uses that includes all of the following: (1) a statement of the purpose of the automated decision tool and its intended benefits, uses, and deployment contexts; (2) a description of the automated decision tool's outputs and how they are used to make, or be a controlling factor in making, a consequential decision; (3) a summary of the type of data collected from natural persons and processed by the automated decision tool when it is used to make, or be a controlling factor in making, a consequential decision; (4) an analysis of potential adverse impacts on the basis of sex, race, color, ethnicity, religion, age, national origin, limited English proficiency, disability, veteran status, or genetic information from the deployer's use of the automated decision tool; (5) a description of the safeguards implemented, or that will be implemented, by the deployer to address any reasonably foreseeable risks of algorithmic discrimination arising from the use of the automated decision tool known to the deployer at the time of the impact assessment; (6) a description of how the automated decision tool will be used by a natural person, or monitored when it is used, to make, or be a controlling factor in making, a consequential decision; and (7) a description of how the automated decision tool has been or will be evaluated for validity or relevance. (b) A deployer shall, in addition to the impact assessment required by subsection (a), perform, as soon as feasible, an impact assessment with respect to any significant update. (c) This Section does not apply to a deployer with fewer than 25 employees unless, as of the end of the prior calendar year, the deployer deployed an automated decision tool that impacted more than 999 people per year. Section 35. (a) Within 60 days after completing an impact assessment required by this Act, a deployer shall provide the impact assessment to the Attorney General. (b) A deployer who knowingly violates this Section shall be liable for an administrative fine of not more than $10,000 per violation in an administrative enforcement action brought by the Attorney General. Each day on which an automated decision tool is used for which an impact assessment has not been submitted as required under this Section shall give rise to a distinct violation of this Section. (c) The Attorney General may share impact assessments with other State entities as appropriate.
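The thresholds in Sections 10(c) and 35 lend themselves to a simple applicability and exposure check. The sketch below is illustrative only, assuming a Python compliance-tooling context; the function and variable names are hypothetical, while the 25-employee, 999-person, 60-day, and $10,000 figures come from the bill text.

```python
from datetime import date, timedelta

# Figures below are drawn from Sections 10(c) and 35; everything else
# (names, structure) is a hypothetical sketch, not an official determination.
SMALL_DEPLOYER_EMPLOYEE_THRESHOLD = 25   # exemption applies below 25 employees...
SMALL_DEPLOYER_IMPACT_THRESHOLD = 999    # ...unless the tool impacted more than 999 people
SUBMISSION_WINDOW_DAYS = 60              # submit the assessment within 60 days of completion
MAX_FINE_PER_VIOLATION = 10_000          # administrative fine per violation (per day of use)


def impact_assessment_required(employee_count: int, people_impacted_prior_year: int) -> bool:
    """True if the Section 10 impact-assessment duty applies to this deployer."""
    if employee_count < SMALL_DEPLOYER_EMPLOYEE_THRESHOLD:
        # Small deployers are exempt unless the tool impacted more than 999 people
        # in the prior calendar year.
        return people_impacted_prior_year > SMALL_DEPLOYER_IMPACT_THRESHOLD
    return True


def submission_deadline(assessment_completed: date) -> date:
    """Date by which the completed assessment must reach the Attorney General (Section 35(a))."""
    return assessment_completed + timedelta(days=SUBMISSION_WINDOW_DAYS)


def max_fine_exposure(days_of_use_without_submission: int) -> int:
    """Upper bound on Section 35(b) fines: each day of use without a submitted
    assessment is a distinct violation, each capped at $10,000."""
    return days_of_use_without_submission * MAX_FINE_PER_VIOLATION


# A 10-employee deployer whose tool impacted 1,200 people last year is not exempt,
# and 30 days of continued use past the deadline risks up to $300,000 in fines.
assert impact_assessment_required(employee_count=10, people_impacted_prior_year=1_200)
print(submission_deadline(date(2027, 2, 1)))   # 2027-04-02
print(max_fine_exposure(30))                   # 300000
```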
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
Section 15(a)
Plain Language
Before or at the time an automated decision tool is used to make a consequential decision, the deployer must notify the affected individual that an automated tool is being used. The notification must include: the tool's purpose, the deployer's contact information, and a plain-language description of how the automated and human components work together to inform the decision. This is a broad pre-decision notice requirement covering all consequential decision domains — employment, education, housing, healthcare, financial services, criminal justice, and more.
Statutory Text
(a) A deployer shall, at or before the time an automated decision tool is used to make a consequential decision, notify any natural person who is the subject of the consequential decision that an automated decision tool is being used to make, or be a controlling factor in making, the consequential decision. A deployer shall provide to a natural person notified under this subsection all of the following: (1) a statement of the purpose of the automated decision tool; (2) the contact information for the deployer; and (3) a plain language description of the automated decision tool that includes a description of any human components and how any automated component is used to inform a consequential decision.
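As a rough illustration of the three notice elements in Section 15(a), a deployer's notice record might carry the following fields. This is a minimal sketch assuming Python; the field names and sample values are hypothetical, since the statute prescribes the content of the notice, not its format.

```python
from dataclasses import dataclass


@dataclass
class PreDecisionNotice:
    """The three items Section 15(a) requires in a pre-decision notice."""
    tool_purpose: str                # 15(a)(1): statement of the tool's purpose
    deployer_contact: str            # 15(a)(2): contact information for the deployer
    plain_language_description: str  # 15(a)(3): human components and how the automated
                                     #           component informs the decision


# Hypothetical example values.
notice = PreDecisionNotice(
    tool_purpose="Screen rental applications against income criteria",
    deployer_contact="compliance@deployer.example",
    plain_language_description=(
        "An automated tool scores each application; a leasing agent reviews "
        "every score before a final decision is made."
    ),
)
```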
H-01 Human Oversight of Automated Decisions · H-01.4 · Deployer · Automated Decisionmaking
Section 15(b)
Plain Language
When a consequential decision is made solely by an automated decision tool — with no human involvement — the deployer must, if technically feasible, honor a person's request to opt out of the automated process and be subject to an alternative selection process or accommodation. The deployer may request identifying information to locate the person and the relevant decision; if the person declines to provide that information, the opt-out obligation does not apply. Note the two conditions: (1) the decision must be made solely by the tool, and (2) the alternative must be technically feasible. Decisions where a human plays any role do not trigger this opt-out right.
Statutory Text
(b) If a consequential decision is made solely based on the output of an automated decision tool, a deployer shall, if technically feasible, accommodate a natural person's request to not be subject to the automated decision tool and to be subject to an alternative selection process or accommodation. After a request is made under this subsection, a deployer may reasonably request, collect, and process information from a natural person for the purposes of identifying the person and the associated consequential decision. If the person does not provide that information, the deployer shall not be obligated to provide an alternative selection process or accommodation.
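The opt-out right turns on a small set of conditions that can be read as simple decision logic. The following is a hedged Python sketch of how the Section 15(b) conditions fit together; the function and parameter names are hypothetical.

```python
# Illustrative decision logic for the Section 15(b) opt-out right described above.
# Names are hypothetical; the statute supplies the conditions themselves.

def must_offer_alternative(
    decision_solely_automated: bool,
    alternative_technically_feasible: bool,
    person_provided_identifying_info: bool,
) -> bool:
    """Return True if the deployer must honor an opt-out request under Section 15(b)."""
    if not decision_solely_automated:
        # Any human role in the decision means the opt-out right does not attach.
        return False
    if not alternative_technically_feasible:
        # The duty applies only "if technically feasible".
        return False
    # The deployer may request identifying information; if the person declines to
    # provide it, the obligation to offer an alternative does not apply.
    return person_provided_identifying_info
```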
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · G-01.3 · G-01.6 · Deployer · Automated Decisionmaking
Section 20(a)-(d)
Plain Language
Deployers must establish, document, implement, and maintain a governance program with reasonable administrative and technical safeguards to manage the risks of algorithmic discrimination from their automated decision tools. The safeguards must be proportionate to the tool's use, the deployer's size and resources, and the technical feasibility of available risk management tools. The program must include: risk identification and safeguard implementation, integration with the impact assessment process, an annual comprehensive compliance review, retention of impact assessment results for at least two years after completion, and ongoing adjustments in response to material changes in technology or operations. At least one designated employee must be responsible for overseeing the program and compliance. That employee has the authority to raise compliance concerns in good faith, and the employer must promptly and completely assess any such concern. Deployers with fewer than 25 employees are exempt unless their tool impacted more than 999 people in the prior calendar year.
Statutory Text
(a) A deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination associated with the use or intended use of an automated decision tool. The safeguards required by this subsection shall be appropriate to all of the following: (1) the use or intended use of the automated decision tool; (2) the deployer's role as a deployer; (3) the size, complexity, and resources of the deployer; (4) the nature, context, and scope of the activities of the deployer in connection with the automated decision tool; and (5) the technical feasibility and cost of available tools, assessments, and other means used by a deployer to map, measure, manage, and govern the risks associated with an automated decision tool. (b) The governance program required by this Section shall be designed to do all of the following: (1) identify and implement safeguards to address reasonably foreseeable risks of algorithmic discrimination resulting from the use or intended use of an automated decision tool; (2) if established by a deployer, provide for the performance of impact assessments as required by Section 10; (3) conduct an annual and comprehensive review of policies, practices, and procedures to ensure compliance with this Act; (4) maintain for 2 years after completion the results of an impact assessment; and (5) evaluate and make reasonable adjustments to administrative and technical safeguards in light of material changes in technology, the risks associated with the automated decision tool, the state of technical standards, and changes in business arrangements or operations of the deployer. (c) A deployer shall designate at least one employee to be responsible for overseeing and maintaining the governance program and compliance with this Act. An employee designated under this subsection shall have the authority to assert to the employee's employer a good faith belief that the design, production, or use of an automated decision tool fails to comply with the requirements of this Act. An employer of an employee designated under this subsection shall conduct a prompt and complete assessment of any compliance issue raised by that employee. (d) This Section does not apply to a deployer with fewer than 25 employees unless, as of the end of the prior calendar year, the deployer deployed an automated decision tool that impacted more than 999 people per year.
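For deployers building out the Section 20 program, the required elements read naturally as a self-audit checklist. The sketch below assumes Python 3.10+ and is illustrative only; the field names and messages are hypothetical, while the annual-review cadence, two-year retention period, and designated-employee requirement come from subsections (b) and (c).

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class GovernanceProgramAudit:
    """Hypothetical self-audit record mirroring the Section 20(b)-(c) elements."""
    discrimination_safeguards_documented: bool        # 20(b)(1)
    impact_assessments_integrated: bool               # 20(b)(2)
    last_comprehensive_review: date | None            # 20(b)(3): annual review
    assessment_retention_years: int                   # 20(b)(4): keep results 2 years
    safeguards_reviewed_after_material_changes: bool  # 20(b)(5)
    responsible_employee_designated: bool             # 20(c)

    def gaps(self, today: date) -> list[str]:
        """List elements that appear out of step with Section 20."""
        issues = []
        if not self.discrimination_safeguards_documented:
            issues.append("No documented safeguards against algorithmic discrimination (20(b)(1)).")
        if not self.impact_assessments_integrated:
            issues.append("Impact assessments not integrated into the program (20(b)(2)).")
        if self.last_comprehensive_review is None or (today - self.last_comprehensive_review).days > 365:
            issues.append("Comprehensive compliance review is overdue (20(b)(3)).")
        if self.assessment_retention_years < 2:
            issues.append("Impact assessment results retained for less than 2 years (20(b)(4)).")
        if not self.safeguards_reviewed_after_material_changes:
            issues.append("Safeguards not re-evaluated after material changes (20(b)(5)).")
        if not self.responsible_employee_designated:
            issues.append("No designated employee responsible for the program (20(c)).")
        return issues
```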
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Section 25
Plain Language
Deployers must publish a clear, readily accessible public policy summarizing: (1) the types of automated decision tools they currently use or make available, and (2) how they manage the foreseeable risks of algorithmic discrimination arising from those tools. This is a standalone public transparency requirement — distinct from the impact assessment, which is submitted to the Attorney General. The policy must be kept current (it covers tools 'currently in use') and must be accessible to the general public, not just regulators.
Statutory Text
A deployer shall make publicly available, in a readily accessible manner, a clear policy that provides a summary of both of the following: (1) the types of automated decision tools currently in use or made available to others by the deployer; and (2) how the deployer manages the reasonably foreseeable risks of algorithmic discrimination that may arise from the use of the automated decision tools it currently uses or makes available to others.
H-02 Non-Discrimination & Bias Assessment · Deployer · Automated Decisionmaking
Section 30(a)-(c)
Plain Language
Deployers are prohibited from using an automated decision tool that results in algorithmic discrimination — unjustified differential treatment or disparate impacts based on protected characteristics. Beginning January 1, 2028, individuals may bring a private civil action for violations. The plaintiff bears the burden of proving that the tool resulted in algorithmic discrimination and caused actual harm. Available remedies include compensatory damages, declaratory relief, and reasonable attorney's fees and costs. Two carve-outs apply: (1) use of the tool solely for self-testing to identify or prevent discrimination, and (2) acts by private clubs not open to the public under the Civil Rights Act of 1964.
Statutory Text
(a) A deployer shall not use an automated decision tool that results in algorithmic discrimination. (b) On and after January 1, 2028, a person may bring a civil action against a deployer for violation of this Section. In an action brought under this subsection, the plaintiff shall have the burden of proof to demonstrate that the deployer's use of the automated decision tool resulted in algorithmic discrimination that caused actual harm to the person bringing the civil action. (c) In addition to any other remedy at law, a deployer that violates this Section shall be liable to a prevailing plaintiff for any of the following: (1) compensatory damages; (2) declaratory relief; and (3) reasonable attorney's fees and costs.
Other · Automated Decisionmaking
Section 40; 815 ILCS 505/2HHHH
Plain Language
Violations of the Preventing Algorithmic Discrimination Act are treated as unlawful practices under Illinois's Consumer Fraud and Deceptive Business Practices Act. This gives the Attorney General access to all CFDBPA remedies, penalties, and enforcement authority — including civil penalties, injunctive relief, and investigative subpoena power — for enforcing the Act. This provision establishes the enforcement channel but does not create a new independent compliance obligation.
Statutory Text
A violation of this Act constitutes an unlawful practice under the Consumer Fraud and Deceptive Business Practices Act. All remedies, penalties, and authority granted to the Attorney General by the Consumer Fraud and Deceptive Business Practices Act shall be available to him or her for the enforcement of this Act. Sec. 2HHHH. Violations of the Preventing Algorithmic Discrimination Act. A person who violates the Preventing Algorithmic Discrimination Act commits an unlawful practice within the meaning of this Act.