S-1802
NJ · State · USA
● Pre-filed
Proposed Effective Date
2026-07-01
New Jersey Senate No. 1802 — An Act concerning artificial intelligence safety tests and supplementing Title 52 of the Revised Statutes
Summary

NJ S 1802 requires the Office of Information Technology to establish minimum requirements for AI safety tests covering cybersecurity threats, bias, inaccuracies, and potential legal violations. Any private entity or public agency that sells, develops, deploys, uses, or offers for sale AI technology in New Jersey must annually conduct safety tests meeting those requirements and submit reports to OIT detailing all technologies tested, test descriptions, third parties used, and results. The bill does not specify any enforcement mechanism, penalties, or private right of action. OIT must adopt implementing regulations in consultation with the Chief Innovation Officer and the Director of the NJ Office of Homeland Security and Preparedness.

Enforcement & Penalties
Enforcement Authority
The Office of Information Technology (OIT) is designated to establish safety test requirements and review annual reports; its role is limited to rulemaking and report review. The bill does not specify any enforcement mechanism, penalties, or private right of action.
Penalties
The bill does not specify any penalties, damages, fines, or other monetary or non-monetary remedies for non-compliance.
Who Is Covered
"Artificial intelligence company" means a private entity or public agency that sells, develops, deploys, uses, or offers for sale in this State artificial intelligence technology.
What Is Covered
"Artificial intelligence technology" means a product or service that primarily uses artificial intelligence to perform its intended function.
Compliance Obligations · 4 obligations
S-01 AI System Safety Program · S-01.1 · Government · General Consumer App
Section 1(b)(1)(a)-(c)
Plain Language
The Office of Information Technology must establish minimum requirements for AI safety tests. At a minimum, safety tests must include: an analysis of cybersecurity threats and vulnerabilities; an analysis of data sources and potential sources of bias, inaccuracy, or legal violations (criminal, copyright, patent, trade secret); and descriptions of remedies or defensive measures to address identified issues. This provision obligates OIT to create the testing framework — the corresponding obligation on AI companies to actually conduct these tests is in Section 1(c).
Statutory Text
The Office of Information Technology shall: (1) establish minimum requirements for an artificial intelligence safety test for artificial intelligence technology sold, developed, deployed, used, or offered for sale in this State that is conducted by an artificial intelligence company pursuant to subsection c. of this section, which requirements shall include but not be limited to: (a) an analysis of potential cybersecurity threats and vulnerabilities; (b) an analysis of an artificial intelligence technology's data sources and potential sources of bias, incorrect or inaccurate information, or violations of State or federal criminal, copyright, patent, or trade secret laws; and (c) descriptions of possible remedies or defensive measures that can be taken by the artificial intelligence company to address all potential cybersecurity threats and vulnerabilities, potential sources of bias, incorrect or inaccurate information, or potential violations of State or federal criminal, copyright, patent, or trade secret laws identified during the conducting of the safety test
S-01 AI System Safety Program · S-01.1 · Developer/Deployer · General Consumer App
Section 1(c)(1)-(4)
Plain Language
Every AI company (broadly defined to include any private entity or public agency that sells, develops, deploys, uses, or offers AI technology for sale in New Jersey) must annually conduct safety tests on all of its AI technologies. The tests must meet OIT's minimum requirements (covering cybersecurity, bias, inaccuracy, and legal compliance). This is both a testing obligation and a reporting obligation — after conducting the tests, the company must submit a report to OIT listing all technologies tested, describing each test and its adherence to OIT requirements, identifying any third parties used, and providing results. The annual cadence means this is a recurring obligation, not a one-time pre-deployment assessment.
Statutory Text
An artificial intelligence company shall annually subject all artificial intelligence technology sold, developed, deployed, used, or offered for sale in this State to a safety test that adheres to the requirements established pursuant to subsection b. of this section and submit a report to the Office of Information Technology containing: (1) a list of all artificial intelligence technologies tested; (2) a description of each safety test conducted, including the safety test's adherence to the requirements established pursuant to subsection b. of this section; (3) a list of all third parties used to conduct safety tests, if any; and (4) the results of each safety test administered.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Government · General Consumer App
Section 1(b)(2), Section 1(c)
Plain Language
OIT is required to review each annual safety test report submitted by AI companies. From the AI company's perspective, this confirms that the annual report is not merely filed and forgotten — OIT has an affirmative obligation to review each submission. While the primary reporting obligation is in Section 1(c), this provision establishes that OIT will actively review the submissions, creating an implicit expectation that reports must be substantive and complete enough to withstand regulatory scrutiny.
Statutory Text
The Office of Information Technology shall: (2) review each annual report required to be submitted by an artificial intelligence company pursuant to subsection c. of this section.
Other · General Consumer App
Section 2
Plain Language
OIT must adopt implementing regulations pursuant to the state Administrative Procedure Act, in consultation with the Chief Innovation Officer and the Director of the NJ Office of Homeland Security and Preparedness. This is a standard rulemaking delegation provision and creates no independent compliance obligation for AI companies — the substantive obligations will flow from the rules OIT ultimately adopts.
Statutory Text
The Office of Information Technology shall, in consultation with the Chief Innovation Officer and the Director of the New Jersey Office of Homeland Security and Preparedness, adopt, pursuant to the "Administrative Procedure Act," P.L.1968, c.410 (C.52:14B-1 et seq.), rules and regulations as may be necessary to implement the provisions of P.L. , c. (C. ) (pending before the Legislature as this bill).