S-1802
NJ · State · USA
● Pre-filed
Proposed Effective Date
2026-07-01
New Jersey Senate No. 1802 — An Act concerning artificial intelligence safety tests and supplementing Title 52 of the Revised Statutes
Summary

Requires all artificial intelligence companies — defined broadly as any private entity or public agency that sells, develops, deploys, uses, or offers for sale AI technology in New Jersey — to annually conduct safety tests on their AI technologies and submit reports with results to the Office of Information Technology. The Office of Information Technology must establish minimum safety test requirements covering cybersecurity threats, data source bias, inaccuracies, and potential legal violations, and must review each annual report. The bill does not specify any enforcement mechanism, penalties, or private right of action for noncompliance. The bill takes effect on the first day of the sixth month after enactment.

Enforcement & Penalties
Enforcement Authority
The Office of Information Technology is designated to establish minimum safety test requirements and review annual reports submitted by artificial intelligence companies. The bill does not specify any enforcement mechanism, penalties, or private right of action.
Penalties
The bill does not specify any penalties, damages, or remedies for noncompliance.
Who Is Covered
"Artificial intelligence company" means a private entity or public agency that sells, develops, deploys, uses, or offers for sale in this State artificial intelligence technology.
What Is Covered
"Artificial intelligence technology" means a product or service that primarily uses artificial intelligence to perform its intended function.
Compliance Obligations · 3 obligations
S-01 AI System Safety Program · S-01.1–S-01.5 · Government · General Consumer App
Section 1(b)(1)(a)-(c)
Plain Language
The Office of Information Technology must establish minimum requirements for AI safety tests that all AI companies must follow. The required safety test must include at minimum: analysis of cybersecurity threats and vulnerabilities, analysis of data sources for bias, inaccuracies, and potential legal violations (criminal, copyright, patent, or trade secret), and descriptions of remedial or defensive measures the company can take to address identified issues. This provision obligates the government agency to create the testing framework; the corresponding company obligation to actually conduct the tests is in subsection c.
Statutory Text
The Office of Information Technology shall: (1) establish minimum requirements for an artificial intelligence safety test for artificial intelligence technology sold, developed, deployed, used, or offered for sale in this State that is conducted by an artificial intelligence company pursuant to subsection c. of this section, which requirements shall include but not be limited to: (a) an analysis of potential cybersecurity threats and vulnerabilities; (b) an analysis of an artificial intelligence technology's data sources and potential sources of bias, incorrect or inaccurate information, or violations of State or federal criminal, copyright, patent, or trade secret laws; and (c) descriptions of possible remedies or defensive measures that can be taken by the artificial intelligence company to address all potential cybersecurity threats and vulnerabilities, potential sources of bias, incorrect or inaccurate information, or potential violations of State or federal criminal, copyright, patent, or trade secret laws identified during the conducting of the safety test
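The three minimum safety-test components in Section 1(b)(1)(a)-(c) can be modeled as a simple checklist. This is a hypothetical sketch for illustration only: the bill prescribes the substance of the test, not any data format, and all field names here are the author's own labels, not statutory terms.

```python
from dataclasses import dataclass

# Illustrative model of the bill's minimum safety-test contents,
# Section 1(b)(1)(a)-(c). Field names are invented, not from the bill.
@dataclass
class SafetyTest:
    cybersecurity_analysis: str = ""  # (a) potential threats and vulnerabilities
    data_source_analysis: str = ""    # (b) bias, inaccuracies, legal violations
    remedial_measures: str = ""       # (c) remedies / defensive measures

    def missing_sections(self) -> list[str]:
        """Names of any required sections that are still empty."""
        return [name for name, value in vars(self).items() if not value.strip()]

draft = SafetyTest(cybersecurity_analysis="Reviewed API surface for injection risk.")
print(draft.missing_sections())  # → ['data_source_analysis', 'remedial_measures']
```

A compliance team could use a structure like this to confirm that every statutorily required analysis is present before a test report is finalized; the actual sufficiency of each analysis would depend on the minimum requirements OIT establishes.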
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Deployer · Government · General Consumer App
Section 1(c)(1)-(4)
Plain Language
Every artificial intelligence company must annually conduct safety tests on all AI technology it sells, develops, deploys, uses, or offers for sale in New Jersey, following the minimum requirements established by the Office of Information Technology. The company must then submit an annual report to OIT containing: a list of all AI technologies tested, a description of each safety test conducted and how it adheres to OIT's requirements, a list of any third parties used to conduct the tests, and the results of each test. The scope is extremely broad — it covers any private entity or public agency with any connection to AI technology in New Jersey, including entities that merely use AI technology. The bill does not specify penalties for failure to test or report.
Statutory Text
An artificial intelligence company shall annually subject all artificial intelligence technology sold, developed, deployed, used, or offered for sale in this State to a safety test that adheres to the requirements established pursuant to subsection b. of this section and submit a report to the Office of Information Technology containing: (1) a list of all artificial intelligence technologies tested; (2) a description of each safety test conducted, including the safety test's adherence to the requirements established pursuant to subsection b. of this section; (3) a list of all third parties used to conduct safety tests, if any; and (4) the results of each safety test administered.
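The four report items in Section 1(c)(1)-(4) amount to a completeness check on the annual submission. Since the bill specifies contents but no filing format, the sketch below is purely illustrative; the key names are the author's shorthand for the statutory items, not an official schema.

```python
# Hypothetical shape of the annual report required by Section 1(c)(1)-(4).
# Key names are illustrative shorthand, not statutory language.
REQUIRED_REPORT_ITEMS = {
    "technologies_tested",   # (1) all AI technologies tested
    "test_descriptions",     # (2) each test and its adherence to OIT requirements
    "third_party_testers",   # (3) third parties used, if any (may be an empty list)
    "test_results",          # (4) results of each test
}

def report_is_complete(report: dict) -> bool:
    """True when a draft report covers every required item."""
    return REQUIRED_REPORT_ITEMS <= report.keys()

draft = {"technologies_tested": ["chatbot-v2"], "test_results": {"chatbot-v2": "pass"}}
print(report_is_complete(draft))  # → False (descriptions and third-party list missing)
```

Note that item (3) is conditional ("if any"), so a complete report could legitimately contain an empty third-party list; only the item's presence is modeled here.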
S-01 AI System Safety Program · S-01.4 · Government · General Consumer App
Section 1(b)(2)
Plain Language
The Office of Information Technology is required to review every annual safety test report submitted by AI companies. This creates a regulatory review obligation on OIT, ensuring that submitted reports are not merely filed but actually examined. The bill does not specify what actions OIT must take if a report reveals deficiencies or what standard of review applies.
Statutory Text
The Office of Information Technology shall: (2) review each annual report required to be submitted by an artificial intelligence company pursuant to subsection c. of this section.