SB-7543B
NY · State · USA
● Pending
Proposed Effective Date
2025-01-01
New York Senate Bill 7543-B — Legislative Oversight of Automated Decision-Making in Government Act (LOADinG Act)
Summary

Requires New York state agencies to subject all automated decision-making systems used in public assistance benefits, rights-affecting functions, or functions materially impacting individual welfare to continued and operational meaningful human review. Agencies must conduct impact assessments before deploying such systems and at least every two years thereafter, covering accuracy, bias, cybersecurity, safety risks, data use, and notification mechanisms. Impact assessments must be submitted to the governor and legislature at least 30 days before implementation and published on the agency's website, with limited redaction authority. If an assessment finds discriminatory or biased outcomes, the agency must immediately cease use of the system. Existing systems must be disclosed to the legislature within one year. The bill also protects state employee collective bargaining rights from displacement by automated systems.

Enforcement & Penalties
Enforcement Authority
No private right of action or designated enforcement agency. Obligations are imposed directly on state agencies. Impact assessments must be submitted to the governor, temporary president of the senate, and speaker of the assembly at least 30 days before implementation. Legislative oversight is the primary accountability mechanism. If an impact assessment finds discriminatory or biased outcomes, the agency must cease use of the system — this is a mandatory self-executing obligation rather than an external enforcement trigger.
Penalties
No monetary penalties, civil penalties, or damages provisions are specified. The sole statutory remedy for a finding of discriminatory or biased outcomes is mandatory cessation of the automated decision-making system by the agency.
Who Is Covered
"State agency" shall mean any department, public authority, board, bureau, commission, division, office, council, committee or officer of the state. Such terms shall not include the legislature or judiciary.
What Is Covered
"Automated decision-making system" shall mean any software that uses algorithms, computational models, or artificial intelligence techniques, or a combination thereof, to automate, support, or replace human decision-making and shall include, without limitation, systems that process data, and apply predefined rules or machine learning algorithms to analyze such data, and generate conclusions, recommendations, outcomes, assumptions, projections, or predictions without meaningful human discretion. "Automated decision-making system" shall not include any software used primarily for basic computerized processes, such as calculators, spellcheck tools, autocorrect functions, spreadsheets, electronic communications, or any tool that relates only to internal management affairs such as ordering office supplies or processing payments, and that do not materially affect the rights, liberties, benefits, safety or welfare of any individual within the state.
Compliance Obligations · 8 obligations
H-01 Human Oversight of Automated Decisions · H-01.6 · Government · Government System · Automated Decisionmaking
State Technology Law § 402(1)
Plain Language
State agencies may not use any automated decision-making system — directly or through contractors — for public assistance benefits, rights-affecting functions, or functions materially impacting individual welfare unless the system is subject to continued and operational meaningful human review. The human reviewer must understand the system's risks and limitations, be trained on it, and have actual authority to approve, deny, or modify the system's decisions. This is not a one-time pre-launch check; the meaningful human review must be ongoing and operational throughout deployment.
Statutory Text
No state agency, or any entity acting on behalf of such agency, which utilizes or applies any automated decision-making system, directly or indirectly, in performing any function that: (a) is related to the delivery of any public assistance benefit; (b) will have a material impact on the rights, civil liberties, safety or welfare of any individual within the state; or (c) affects any statutorily or constitutionally provided right of an individual, shall utilize such automated decision-making system, unless such automated decision-making system is subject to continued and operational meaningful human review.
PS-01 Government AI Accountability · PS-01.4 · Government · Government System · Automated Decisionmaking
State Technology Law § 402(2)
Plain Language
State agencies may not procure, purchase, or acquire any AI-powered service or system for use in public assistance, rights-affecting, or welfare-impacting functions unless the system supports continued and operational meaningful human review. This creates a procurement gate: vendors selling automated decision-making systems to New York state agencies must ensure their systems are architecturally capable of supporting ongoing human oversight, and agencies must verify this capability before acquisition.
Statutory Text
No state agency shall authorize any procurement, purchase or acquisition of any service or system utilizing, or relying on, automated decision-making systems in performing any function that is: (a) related to the delivery of any public assistance benefit; (b) will have a material impact on the rights, civil liberties, safety or welfare of any individual within the state; or (c) affects any statutorily or constitutionally provided right of an individual unless such automated decision-making system is subject to continued and operational meaningful human review.
Other · Government System · Automated Decisionmaking
State Technology Law § 402(3)
Plain Language
Agencies may not use automated decision-making systems in a way that displaces state employees, reduces their hours or benefits, transfers their duties to an automated system, or impairs collective bargaining agreements. All existing employment rights, civil service status, and bargaining unit membership must be preserved. This is a labor protection provision — it creates no AI-specific compliance obligation fitting the taxonomy but is significant for agencies planning AI deployments that could affect staffing.
Statutory Text
The use of an automated decision-making system shall not affect (a) the existing rights of employees pursuant to an existing collective bargaining agreement, or (b) the existing representational relationships among employee organizations or the bargaining relationships between the employer and an employee organization. The use of an automated decision-making system shall not result in the: (1) discharge, displacement or loss of position, including partial displacement such as a reduction in the hours of non-overtime work, wages, or employment benefits, or result in the impairment of existing collective bargaining agreements; (2) transfer of existing duties and functions currently performed by employees of the state or any agency or public authority thereof to an automated decision-making system; or (3) transfer of future duties and functions ordinarily performed by employees of the state or any agency or public authority. The use of an automated decision-making system shall not alter the rights or benefits, and privileges, including but not limited to terms and conditions of employment, civil service status, and collective bargaining unit membership status of all existing employees of the state or any agency or public authority thereof shall be preserved and protected.
PS-01 Government AI Accountability · PS-01.2 · Government · Government System · Automated Decisionmaking
State Technology Law § 403(1)(a)-(f)
Plain Language
Before deploying any automated decision-making system, state agencies must conduct a comprehensive impact assessment signed by the individual(s) responsible for meaningful human review. The assessment must cover system objectives, effectiveness evaluation, technical description (algorithms, training data), bias and discrimination testing across an extensive list of protected characteristics, cybersecurity and privacy risks, public health and safety risks, foreseeable misuse, data handling practices, and notification mechanisms for affected individuals. After the initial assessment, agencies must conduct reassessments at least every two years and before any material change that could alter the system's outcomes. This is among the most detailed government AI impact assessment requirements in U.S. state legislation.
Statutory Text
State agencies seeking to utilize or apply an automated decision-making system permitted under section four hundred two of this article with continued and operational meaningful human review shall conduct or have conducted an impact assessment substantially completed and bearing the signature of one or more individuals responsible for meaningful human review for the lawful application and use of such automated decision-making system. Following the first impact assessment, an impact assessment shall be conducted in accordance with this section at least once every two years. An impact assessment shall be conducted prior to any material change to the automated decision-making system that may change the outcome or effect of such system. Such impact assessments shall include: (a) a description of the objectives of the automated decision-making system; (b) an evaluation of the ability of the automated decision-making system to achieve its stated objectives; (c) a description and evaluation of the objectives and development of the automated decision-making system including: (i) a summary of the underlying algorithms, computational models, and artificial intelligence tools that are used within the automated decision-making system; and (ii) the design and training data used to develop the automated decision-making system process; (d) testing for: (i) accuracy, fairness, bias and discrimination, and an assessment of whether the use of the automated decision-making system produces discriminatory results on the basis of a consumer's or a class of consumers' actual or perceived race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability and outlines mitigations for any identified performance differences in outcomes across relevant groups impacted by such use; (ii) any cybersecurity vulnerabilities and privacy risks resulting from the deployment and use of the automated decision-making system, and the development or existence of safeguards to mitigate the risks; (iii) any public health or safety risks resulting from the deployment and use of the automated decision-making system; (iv) any reasonably foreseeable misuse of the automated decision-making system and the development or existence of safeguards against such misuse; (e) the extent to which the deployment and use of the automated decision-making system requires input of sensitive and personal data, how that data is used and stored, and any control users may have over their data; and (f) the notification mechanism or procedure, if any, by which individuals impacted by the utilization of the automated decision-making system may be notified of the use of such automated decision-making system and of the individual's personal data, and informed of their rights and options relating to such use.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · H-02.3 · Government · Government System · Automated Decisionmaking
State Technology Law § 403(2)
Plain Language
If a required impact assessment finds that an automated decision-making system produces discriminatory or biased outcomes, the agency must immediately cease all use of that system — including ceasing reliance on any information the system has already produced. This is a mandatory shutdown requirement with no cure period or remediation option: the statute says 'shall cease,' not 'shall mitigate.' The prohibition extends to derivative outputs (information produced using the system), which means agencies cannot continue using conclusions or recommendations the biased system previously generated.
Statutory Text
Notwithstanding the provisions of this article or any other law, if an impact assessment finds that the automated decision-making system produces discriminatory or biased outcomes, the state agency shall cease any utilization, application, or function of such automated decision-making system, and of any information produced using such system.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Government · Government System · Automated Decisionmaking
State Technology Law § 404(1)
Plain Language
Every impact assessment must be submitted to the governor and legislative leaders at least 30 days before the agency implements the automated decision-making system covered by the assessment. This creates a mandatory waiting period: agencies cannot deploy a system until 30 days after the governor and legislature receive the completed assessment. While the statute does not explicitly grant the governor or legislature veto power, the 30-day window provides an opportunity for legislative intervention before deployment.
Statutory Text
Each impact assessment conducted pursuant to this article shall be submitted to the governor, the temporary president of the senate, and the speaker of the assembly at least thirty days prior to the implementation of the automated decision-making system that is the subject of such assessment.
PS-01 Government AI Accountability · PS-01.3 · Government · Government System · Automated Decisionmaking
State Technology Law § 404(2)(a)-(c)
Plain Language
Agencies must publish each impact assessment on their website, making it publicly accessible. Two narrow redaction exceptions exist: (1) information whose disclosure would substantially harm public health or safety, infringe individual privacy, or significantly impair IT or operational security; and (2) information about systems used for security, fraud detection, identity theft prevention, or law enforcement functions. In both cases, the agency must publish the redacted assessment along with an explanatory statement describing the process by which it determined redaction was warranted — the redaction authority is not a blanket exemption from publication.
Statutory Text
(a) The impact assessment of an automated decision-making system shall be published on the website of the relevant state agency. (b) If the state agency makes a determination that the disclosure of any information required in the impact assessment would result in a substantial negative impact on health or safety of the public, infringe upon the privacy rights of individuals, or significantly impair the state agency's ability to protect its information technology or operational assets, such state agency may redact such information, provided that an explanatory statement on the process by which the state agency made such determination is published along with the redacted impact assessment. (c) If the impact assessment covers any automated decision-making system that includes technology that is used to prevent, detect, protect against or respond to security incidents, identity theft, fraud, harassment, malicious or deceptive activities or other illegal activity, preserve the integrity or security of systems, or to investigate, report or prosecute those responsible for any such malicious or deceptive action, such state agency may redact such information for the purposes of this subdivision, provided that an explanatory statement on the process by which the state agency made such determination is published along with the redacted impact assessment.
PS-01 Government AI Accountability · PS-01.1 · Government · Government System · Automated Decisionmaking
§ 3(a)-(f)
Plain Language
Within one year of the act's effective date, every state agency currently using an automated decision-making system must submit a disclosure to the legislature cataloguing that system. The disclosure must include a system description, vendor list, start date, purpose summary (including what human decision-making it supports or replaces), whether impact assessments were conducted and their results, and any other relevant information. This is a retroactive inventory requirement for existing systems — it applies to systems already deployed, not just future ones, and serves as a baseline for legislative oversight. This provision takes effect immediately upon enactment, while the substantive Article IV requirements take effect one year later.
Statutory Text
Any state agency, that directly or indirectly, utilizes an automated decision-making system, as defined in section 401 of the state technology law, shall submit to the legislature a disclosure on the use of such system, no later than one year after the effective date of this section. Such disclosure shall include: (a) a description of the automated decision-making system utilized by such agency; (b) a list of any software vendors related to such automated decision-making system; (c) the date that the use of such system began; (d) a summary of the purpose and use of such system, including a description of human decision-making and discretion supported or replaced by the automated decision-making system; (e) whether any impact assessments for the automated decision-making system were conducted and the dates and summaries of the results of such assessments where applicable; and (f) any other information deemed relevant by the agency.