A-03356
NY · State · USA
● Pending
Proposed Effective Date
2025-07-26
New York Assembly Bill 3356 — Advanced Artificial Intelligence Licensing Act
Summary

Establishes a comprehensive licensing and registration regime for high-risk advanced artificial intelligence systems in New York, administered by the Department of State and the Secretary of State. Requires any person developing or operating a high-risk AI system to register with the Secretary and obtain a license before deployment, with pre-approval required for source code modifications, upgrades, and rewrites. Mandates creation of independent ethics and risk management boards for each operator, annual comprehensive risk assessment reports, automatic operation logging with 10-year retention, internal kill-switch controls, and incident reporting. Establishes a binding ethical code of conduct covering principles of respect, equity, accountability, care, trust, inclusivity, oversight, notice, and safety. Categorically prohibits certain AI applications including subliminal manipulation, autonomous weapons without human control, and predictive behavioral systems that infringe on individual liberty. Enforcement is exclusively governmental through the Department of State and Attorney General, with civil penalties, criminal penalties up to class C felony, license revocation, and injunctive relief. No private right of action is created.

Enforcement & Penalties
Enforcement Authority
Department of State, with the Secretary of State administering licensing, investigations, examinations, and enforcement actions. The Attorney General brings civil actions and injunctive relief proceedings at the request of the Department. The Department may impose civil and criminal penalties, revoke or suspend licenses, issue summary suspensions, order administrative seizure of services, and issue stop orders. No private right of action is created. Investigators appointed by the Department are designated peace officers for enforcement purposes.
Penalties
Civil penalty not to exceed the amount gained from the violation or the actual damages caused by the violation, whichever is greater; the penalty must be proportionate to the violation. Equitable and injunctive relief are available. For prohibited AI systems, the civil penalty is the amount earned from creating the prohibited system or the amount of damages caused by it, whichever is greater. Criminal penalties include: misdemeanor for false statements (fine up to $500 and/or 6 months imprisonment); misdemeanor for false statements by ethics board members (same maximums); class A misdemeanor for negligent uncontainment of source code; class E felony for willful uncontainment; class C felony for willful or negligent uncontainment of financial systems or prohibited systems; class D felony for knowingly operating a prohibited system. Examination costs are assessed against the licensee.
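The bill's civil-penalty formula (the greater of the violator's gain or the actual damages caused) reduces to a one-line comparison. The following is a hypothetical illustration, not tooling referenced by the bill; the function name and inputs are invented for clarity:

```python
def civil_penalty_cap(amount_gained: float, actual_damages: float) -> float:
    """Upper bound on the civil penalty under the bill's formula:
    the amount gained from the violation or the actual damages it
    caused, whichever is greater. The penalty actually imposed must
    also be proportionate to the violation."""
    return max(amount_gained, actual_damages)

# A violation that earned $40,000 but caused $250,000 in damages is
# capped at the larger figure.
print(civil_penalty_cap(40_000, 250_000))  # prints 250000
```

The same greater-of structure applies to prohibited systems, with the amount earned from creating the system in place of the amount gained from the violation.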
Who Is Covered
"Operator" shall mean the person who distributes and has control over the development of a high-risk advanced artificial intelligence system. Where a high-risk advanced artificial intelligence system is publicly accessible code, the operator shall be deemed the platform or platforms which host the system.
"Person" shall mean any individual, group of individuals, partnership, corporation, association or any other entity.
What Is Covered
"Advanced artificial intelligence system" shall mean any digital application or software, whether or not integrated with physical hardware, that autonomously performs functions traditionally requiring human intelligence. This includes, but is not limited to the system: (a) Having the ability to learn from and adapt to new data or situations autonomously; or (b) Having the ability to perform functions that require cognitive processes such as understanding, learning, or decision-making for each specific task.
"High-risk advanced artificial intelligence system" shall mean any advanced artificial intelligence system that possesses capabilities that can cause significant harm to the liberty, emotional, psychological, financial, physical, or privacy interests of an individual or groups of individuals, or which have significant implications on governance, infrastructure, or the environment. The director shall assess any such public or private system in determining whether such system requires registration. High-risk advanced artificial intelligence systems shall, at least, include systems that are designed to, whether directly or indirectly, on purpose or without purpose, do the following: (a) Cause material harm to persons, wildlife, or the environment; (b) Manage, control, or significantly influence healthcare or healthcare-related systems, including but not limited to, diagnosis, treatment plans, pharmaceutical recommendation, or storing of patient records; (c) Operate, control, or guide motor vehicles, aircraft, or any other forms of transport which, if it were to malfunction, has a high probability of posing a risk to human safety or environmental integrity; (d) Psychologically profile individuals for the purpose of targeted advertising, behavioral prediction, or the manipulation of user experiences and interactions in products or services; (e) Manage, control, or create critical infrastructure, including but not limited to the supply of water, electricity, gas, and heating, or construction; (f) Facilitate, control, or significantly impact financial systems, including but not limited to control of stock exchanges, stock trading, credit scoring, or other activities where inaccuracies or failures could lead to substantial economic harm for individuals or broader financial instability; (g) Assist, replace, or augment human decision-making in law enforcement, the judiciary, the executive, the legislature, or any government agency; (h) Enable advanced surveillance capabilities; (i) Involve the use or development of autonomous weapons systems that can cause harm, destruction, or engage in conflict without meaningful human intervention; and (j) Decode or interpret neural or cognitive activity.
Compliance Obligations · 20 obligations
R-02 Regulatory Disclosure & Submissions · R-02.3 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 510(1)-(3)
Plain Language
Any person developing or operating a high-risk AI system in New York must register the system with the Secretary of State by applying for a license. Registration is triggered by active deployment and covers all updates, modifications, and capability expansions. For autonomous weapons systems specifically (§ 501(2)(i)), pre-development written disclosure is required before active development begins. The Secretary may order cessation of development or public access pending classification review, and determinations of high-risk status are made through formal public hearings. The registration duty applies to systems that more likely than not qualify as high-risk, with the Secretary empowered to proactively identify unregistered systems.
Statutory Text
§ 510. Duty to register a high-risk advanced artificial intelligence system. 1. Any person who develops a high-risk advanced artificial intelligence system, whether in whole or in part, in the state that is presently performing functions for its intended purpose or within its designated operational parameters, shall have the duty to disclose the existence and function of said system to the secretary by applying for a license as required under section five hundred eleven of this article or, where applicable, a supplemental license under section five hundred twelve of this article. This duty to disclose shall be triggered by the system's active deployment and usage in its intended context or field of operation and is applicable irrespective of the system's location of operation. This duty extends to any updates, modifications, upgrades, or expansions of the system's capabilities or intended uses. 2. Any person developing a system as defined in paragraph (i) of subdivision two of section five hundred one of this article within the state shall disclose in writing to the secretary the development of such a system prior to active development of the system. Such writing shall set forth the names and addresses of all persons involved in the development of such system, a description of the system, the systems functions and intended use cases, and measures that will be taken to ensure that any risks posed by the system are mitigated. The secretary may, upon receipt of such writing, require such person to cease development of such a system where, in the secretary's discretion, the secretary believes the system has a high likelihood of violating section five hundred twenty-nine or section five hundred thirty of this article. 3. 
The duties set forth in this section shall apply only to advanced artificial intelligence systems that more likely than not fall under the definition of high-risk advanced artificial intelligence system as defined in section five hundred one of this article. The secretary shall send notice to any system that is presently performing functions for its intended purpose or within its designated operational parameters which, in their discretion, may fall under the definition of high-risk advanced artificial intelligence systems but that has not registered with the secretary. In the notice, the secretary may require the creators of the system to cease development and access by private individuals or the general public, pending review. Such notice shall be binding and have the effect of law. Determinations that a system is a high-risk advanced artificial intelligence system shall be made in a hearing held pursuant to the provisions of section five hundred nine of this article. In such hearing, the administrator of such hearing shall accept comments from the public. Such hearing shall, to the extent practicable, not disclose any proprietary information concerning the advanced artificial intelligence system to the public.
Other · Automated Decisionmaking
State Tech. Law § 511(1)-(4)
Plain Language
No person may develop or operate a high-risk AI system in New York without first obtaining a license from the Secretary of State. For autonomous weapons systems, both development and operation require licensing. For all other high-risk systems, operation requires a license. Applications must be sworn, written, and accompanied by a fee set by regulation. Licenses remain valid indefinitely unless revoked, suspended, or surrendered. This is a pre-market authorization gate — no high-risk AI system may lawfully operate in the state without this license.
Statutory Text
§ 511. License. 1. No person shall (a) develop, in whole or in part, a high-risk advanced artificial intelligence system as defined in paragraph (i) of subdivision two of section five hundred one of this article or operate such a system that is presently performing functions for its intended purpose or within its designated operation parameters within the state where such system was developed outside of the state; or (b) operate a high-risk advanced artificial intelligence system other than a system as defined in paragraph (i) of subdivision two of section five hundred one of this article that is presently performing functions for its intended purpose or within its designated operational parameters within the state without first obtaining a license. 2. An application for a license under this article shall be in writing, under oath and in the form prescribed by the secretary. 3. At the time of filing an application for a license, the applicant shall pay to the secretary an application fee. Such application fee shall be prescribed pursuant to the rules and regulations of the secretary. 4. A license granted pursuant to this article shall be valid unless revoked or suspended by the secretary or surrendered by the licensee.
Other · Automated Decisionmaking
State Tech. Law § 512(1)-(2)
Plain Language
Entities (not natural persons) that already hold a license must obtain a separate supplemental license for each additional high-risk AI system they develop. Supplemental licenses follow the same application process and are subject to the same requirements as the initial license. This effectively means one license per system, preventing operators from using a single license to cover unlimited new systems.
Statutory Text
§ 512. Supplemental license. 1. Where a person other than a natural person is licensed under this article, such person shall apply for a supplemental license for each additional high-risk advanced artificial intelligence system such person develops after being licensed initially pursuant to section five hundred eleven of this article. 2. Notwithstanding any provision of law, rule or regulation to the contrary, a supplemental license shall be provided in the same manner as a license granted pursuant to the provisions of section five hundred eleven of this article and shall be subject to the same requirements, duties and prohibitions as provided for in this article.
R-02 Regulatory Disclosure & Submissions · R-02.3 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 513(1)-(4)
Plain Language
License applications must include: the applicant's identity and corporate details, the names and addresses of all ethics and risk management board members, principals, and officers, and a description of all known general use cases of the AI system. The Secretary conducts a substantive review and may deny the license if the applicant's ethics, experience, character, and fitness do not command community confidence. Denied applicants receive a license fee refund but not an investigation fee refund. This functions as both a regulatory submission and a character-fitness assessment for AI operators.
Statutory Text
§ 513. Application for licenses. 1. An application for a license required under this article shall be in writing, under oath, and in the form prescribed by the secretary, and shall contain the following: (a) the exact name and address of the applicant, and if the applicant be a co-partnership or association, the names of the members thereof, and if a corporation the date and place of its incorporation; (b) the name and the business and residential address of each member of the ethics and risk management board, each principal, and officer of the applicant; and (c) the description of all known general use cases of the advanced artificial intelligence system, including any purposes foreseen to be implemented by the applicant. A "use case" shall be defined as broad category of potential use. 2. After the filing of an application for a license accompanied by payment of the fees for license and investigation, it shall be substantively reviewed. After the application is deemed sufficient and complete, the secretary shall issue the license, or the secretary may refuse to issue the license if the secretary shall find that the ethics, experience, character and general fitness of the applicant or any person associated with the applicant are not such as to command the confidence of the community and to warrant the belief that the business will be conducted honestly, fairly and efficiently within the purposes and intent of this article. 3. If the secretary refuses to issue a license, the secretary shall notify the applicant of the denial, return to the applicant the sum paid as a license fee, but retain the investigation fee to cover the costs of investigating the applicant. 4. Each license issued pursuant to this article shall remain in full force unless it is surrendered by the licensee, revoked or suspended.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 514(1)-(2)
Plain Language
Operators must conspicuously post their AI license in their physical office and, if they have a public internet presence, on their website or mobile application. The license is non-transferable and non-assignable. This creates a public transparency obligation — users and the public can verify an operator's licensed status.
Statutory Text
§ 514. License provisions and posting. 1. Any license issued under this article shall state the name and address of the licensee, and if the licensee be a co-partnership or association, the names of the members thereof, and if a corporation the date and place of its incorporation. 2. Such license or licenses shall be kept conspicuously posted in the office of the licensee and, where such licensee has a public internet presence, on the website or mobile application of the licensee and shall not be transferable or assignable.
G-01 AI Governance Program & Documentation · G-01.1 · G-01.5 · G-01.6 · Deployer · Automated Decisionmaking
State Tech. Law § 516(1)-(5)
Plain Language
Every operator must establish an independent ethics and risk management board of at least five individuals, none of whom may be members, officers, or directors of the operator's entity. The board must annually submit to the Secretary a comprehensive report covering: all possible use cases, thorough risk assessments for each use case, evaluation of whether certain applications should be constrained, mitigation plans, incident review, user education plans, conflicts of interest disclosure, and compliance updates. Board members face criminal liability (misdemeanor, up to $500 fine and/or 6 months imprisonment) for false statements, undisclosed conflicts, or misrepresentation of risks. Operators with multiple licensed systems need only one board. The independence requirement — no insiders on the board — is a key compliance detail.
Statutory Text
§ 516. Ethics and risk management board and reports. 1. Every operator of a licensed high-risk advanced artificial intelligence system or systems shall establish an ethics and risk management board composed of no less than five individuals who shall have the responsibility to assess the ethical implications of all possible use cases of the system, whether such use cases are intended or unintended, and whether likely or unlikely to be used, and the current operational outcomes of the system. Such operator, other than an operator who is a natural person, operating more than one high-risk advanced artificial intelligence system with a supplemental license shall not be required to have more than one ethics and risk management board for each system. 2. No member of an ethics and risk management board shall be a member, officer, or director within the operator's entity. No member shall be required to be employed by the operator. 3. Such board shall adopt rules governing its decision-making processes, duties and responsibilities. Such rules shall not conflict with the provisions of this article. 4. Annually, the ethics and risk management board of each operator shall submit to the secretary a comprehensive report for each licensed high-risk advanced artificial intelligence system which consists of the following: (a) All possible use cases, whether intended or unintended, whether likely or unlikely. (b) A thorough risk assessment for each use case, considering and evaluating the potential for harm, irrespective of the probability of such risk materializing. This shall include, but not be limited to, the system's potential impact on privacy, security, fairness, economic implications, societal well-being, and safety of persons and the environment. (c) A detailed evaluation of known use cases of the system by users, exploring whether certain applications ought to be constrained or banned due to ethical considerations. 
This shall include an assessment of the operator's capacity to impose such constraints on use cases. (d) A mitigation plan for each identified risk, including preemptive measures, monitoring processes, and responsive actions. This shall also include a communication strategy to inform users and stakeholders about potential risks and steps taken to mitigate them. (e) A comprehensive review of any incidents or failures of the system in the past year, detailing the circumstances, impacts, measures taken to address the issue, and modifications made to prevent such incidents in the future. (f) Any existing attempts to educate users and, based on the existing use of the system by users, a detailed plan on how the operator intends to inform and instruct users on the safe and ethical use of the system, considering varying levels of digital literacy among users. (g) A disclosure of any conflicts of interest within the ethics board, which could potentially influence the board's decisions and recommendations. This shall include measures to manage and resolve such conflicts. (h) An update on the measures taken by the operator to ensure the system's adherence to existing laws, regulations, and ethical guidelines related to artificial intelligence. 5. In addition to any applicable civil penalties pursuant to section five hundred eight of this article, a member of an ethics and risk management board who makes a false statement, fails to disclose conflicts of interest or misrepresents the risks or severity of the risks posed by a system in the performance of their duties as a member of such board, shall be guilty of a misdemeanor and, upon conviction, shall be fined not more than five hundred dollars or imprisoned for not more than six months or both, in the discretion of the court.
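The structural requirements of § 516(1)-(2), a board of at least five members none of whom is a member, officer, or director of the operator's entity, can be expressed as a simple check. This is a hypothetical compliance-tooling sketch, not part of the bill; the class and role labels are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

# Roles § 516(2) bars from board membership. Mere employment by the
# operator is not disqualifying: the statute says members need not be
# employed by the operator.
DISQUALIFYING_ROLES = {"member", "officer", "director"}

@dataclass
class BoardMember:
    name: str
    role_in_operator: Optional[str]  # None if unaffiliated with the operator

def board_is_compliant(board: list) -> bool:
    """Check the two structural requirements of § 516(1)-(2):
    at least five members, and no member holds a disqualifying
    role within the operator's entity."""
    if len(board) < 5:
        return False
    return all(m.role_in_operator not in DISQUALIFYING_ROLES for m in board)
```

Note that this checks only composition; the board's substantive duties (the annual report, conflict disclosures, and so on) are not reducible to a structural test.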
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking
State Tech. Law § 516(4)
Plain Language
The ethics and risk management board must annually submit a comprehensive report to the Secretary for each licensed system. The report must cover all possible use cases, detailed risk assessments, evaluation of which applications should be constrained, mitigation plans, incident and failure reviews, user education plans, board conflicts of interest, and compliance updates. This is a scheduled proactive regulatory submission — operators cannot wait to be asked. The scope is very broad, requiring assessment even of unlikely and unintended use cases.
Statutory Text
4. Annually, the ethics and risk management board of each operator shall submit to the secretary a comprehensive report for each licensed high-risk advanced artificial intelligence system which consists of the following: (a) All possible use cases, whether intended or unintended, whether likely or unlikely. (b) A thorough risk assessment for each use case, considering and evaluating the potential for harm, irrespective of the probability of such risk materializing. This shall include, but not be limited to, the system's potential impact on privacy, security, fairness, economic implications, societal well-being, and safety of persons and the environment. (c) A detailed evaluation of known use cases of the system by users, exploring whether certain applications ought to be constrained or banned due to ethical considerations. This shall include an assessment of the operator's capacity to impose such constraints on use cases. (d) A mitigation plan for each identified risk, including preemptive measures, monitoring processes, and responsive actions. This shall also include a communication strategy to inform users and stakeholders about potential risks and steps taken to mitigate them. (e) A comprehensive review of any incidents or failures of the system in the past year, detailing the circumstances, impacts, measures taken to address the issue, and modifications made to prevent such incidents in the future. (f) Any existing attempts to educate users and, based on the existing use of the system by users, a detailed plan on how the operator intends to inform and instruct users on the safe and ethical use of the system, considering varying levels of digital literacy among users. (g) A disclosure of any conflicts of interest within the ethics board, which could potentially influence the board's decisions and recommendations. This shall include measures to manage and resolve such conflicts. 
(h) An update on the measures taken by the operator to ensure the system's adherence to existing laws, regulations, and ethical guidelines related to artificial intelligence.
S-01 AI System Safety Program · S-01.4 · S-01.7 · Deployer · Automated Decisionmaking
State Tech. Law § 517(1)-(4)
Plain Language
The Secretary conducts periodic source code and outcome reviews of each licensed high-risk AI system, at a frequency determined by system risk, complexity, update frequency, and compliance history. The Secretary issues binding recommendations based on these reviews. Operators must then consult with the Secretary, produce a binding detailed implementation plan with a timeline, and execute it. Plan amendments are permitted only for unexpected occurrences and require Secretary approval within 30 days. Non-compliance with recommendations triggers fines and penalties. This creates an ongoing government-supervised safety review cycle — not a one-time pre-deployment check.
Statutory Text
§ 517. Source code and outcome review. 1. The secretary shall conduct periodic evaluations of the source code and outcomes associated with each high-risk advanced artificial intelligence system. These examinations shall determine whether the system is in compliance with this article. The timing and frequency of these reviews shall be determined at the secretary's discretion, taking into account the potential risk posed by the system, the complexity of the system, the frequency of updates and upgrades, the complexity of such updates and upgrades, and any previous issues of non-compliance. 2. Upon completion of the review, the secretary is empowered to make binding recommendations to the operator to ensure the system's functionality and outcomes are aligned with the principles in the advanced artificial intelligence ethical code of conduct pursuant to section five hundred twenty-nine of this article, restrictions on prohibited artificial intelligence systems pursuant to section five hundred thirty of this article, and limitations and procedures for source code modifications, updates, upgrades, and rewrites pursuant to section five hundred nineteen of this article. 3. Following receipt of the secretary's recommendations, the operator shall consult with the secretary to determine the feasibility of implementing the recommendations and the time frame in which such recommendations can be implemented to ensure full compliance with the secretary's recommendations. The operator shall provide a detailed plan outlining how the recommendations will be addressed, along with a timeline for their implementation. 
The detailed plan shall be binding on the operator; provided however that where an unexpected occurrence arises which causes changes to such plan, the operator shall be entitled to extend such timeline or alter such plans where such operator notifies the secretary in writing regarding the unexpected occurrence and, within such writing, sets forth amendments to the detailed plan and timeline. The secretary shall have thirty days to approve or reject such amendments. Where such amendments are rejected, the operator shall continue with their original plan and timeline. 4. The secretary shall monitor the operator's compliance with such recommendations and may impose fines and other penalties pursuant to the provisions of this article for non-compliance that the secretary shall deem just and proportionate to the violation.
S-01 AI System Safety Program · S-01.1 · Developer · Automated Decisionmaking
State Tech. Law § 518(1)-(5)
Plain Language
Developers of high-risk AI systems — whether licensed or not — may not willfully or negligently allow their source code to become uncontained (i.e., reproduced so widely it becomes impossible to control). Written Secretary authorization is required for any intentional release that could lead to uncontainment. Criminal penalties attach to individuals: class E felony for willful uncontainment, class A misdemeanor for negligent uncontainment, and class C felony for uncontaining financial systems or prohibited AI systems. The knowledge defense protects individuals who had no explicit or implicit awareness of the risk. This effectively creates a containment obligation for high-risk AI source code.
Statutory Text
§ 518. Willfully or negligently uncontaining high-risk source code. 1. No licensee or non-licensee who develops a high-risk advanced artificial intelligence system shall willfully or negligently uncontain their source code except where authorized by the secretary in writing. 2. Any member, officer, director or employee of an entity who willfully violates subdivision one of this section shall be guilty of a class E felony. 3. Any member, officer, director or employee of an entity who negligently violates subdivision one of this section shall be guilty of a class A misdemeanor. 4. Where any member, officer, director or employee of an entity willfully or negligently uncontains a high-risk advanced artificial intelligence system described in paragraph (f) of subdivision two of section five hundred one of this article or a prohibited high-risk advanced artificial intelligence system as described in section five hundred thirty of this article shall be guilty of a class C felony. 5. The provisions of this section shall not be construed as imposing liability on any member, officer, director or employee who had no explicit or implicit knowledge of the risk or circumstances that caused the uncontainment of the high-risk advanced artificial intelligence system.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking
State Tech. Law § 519(1)-(5)
Plain Language
Licensees must obtain written Secretary approval before deploying any source code modification or upgrade — but not minor updates. Modifications (changes to decision-making logic) and upgrades (new features) require a written submission detailing purpose, new functions, reasons, and risk assessment. The Secretary has 30 business days to approve (extendable by 30 more), with deemed approval if no response. Rewrites (substantial changes resulting in a new version) are reviewed as new applications with a 180-business-day timeline. All changes must be developed in pre-production. Updates — defined as minor enhancements, bug fixes, and performance improvements — are exempt. This creates a pre-deployment approval gate for all material system changes.
Statutory Text
§ 519. Source code modifications, updates, upgrades, and rewrites. 1. Where a licensee intends to modify or upgrade the source code of their high-risk advanced artificial intelligence system, such licensee shall be required to inform the secretary of such modification or upgrade and shall be prohibited from implementing such modification or upgrade in an accessible version of the system without express consent of the secretary in writing. This section shall not apply to source code updates. 2. A licensee shall, in writing to the secretary, set forth the purpose of the modification or upgrade, the new functions added to the system or the functions modified, the reason for the modification or upgrade, and an assessment of new risks or risks that may be more probable as a result of the modification or upgrade. The secretary shall, upon receipt of notice, have thirty business days to provide the licensee with approval of the modification or upgrade. Where approval is not received within thirty business days, absent an extension in writing which shall not exceed thirty additional business days, the modification or upgrade shall be deemed approved. Nothing in this subdivision shall be construed as limiting the ability of the secretary to take any action they are authorized to take in relation to the approved modification or upgrade. Where the secretary rejects the modification or upgrade, the secretary shall set forth in writing the reasons for the rejection and steps that the licensee can take to receive approval. Where the secretary approves the modification or upgrade, the licensee may immediately implement such modification or upgrade in a publicly accessible version. 3. 
A licensee who rewrites the source code of its system shall comply with the same standards set forth in subdivisions one and two of this section provided however that the secretary shall examine such source code in the same manner as a new application and shall provide a letter of approval or rejection upon completion of such review within one hundred eighty business days of receipt of such notices except where the secretary requires an extension of time, then an extension of no more than one hundred eighty days shall be authorized. Where the secretary rejects the rewrite, such letter of rejection shall state the reasons for the rejection and steps that the licensee can take to correct such rejection, if any. Where the secretary approves the modification or upgrade, the licensee may immediately implement such modification or upgrade in a publicly accessible version. 4. All modifications, upgrades, and rewrites shall be conducted in a pre-production environment, which shall mean any stage prior to the accessible version. 5. For purposes of this section: (a) "Modify" shall mean altering the source code of the system to alter the way by which the system, or any features within the system, makes decisions. (b) "Upgrade" shall mean altering the source code of the system which gives it new features or functions. (c) "Rewrite" shall mean a change in the source code to such a substantial degree that: (i) it effectively results in a new version of the system; or (ii) the change nullifies all or a substantial amount of the initial findings of the secretary in the operator's original application. (d) "Update" shall mean a change to the source code that includes minor enhancements, improvements, modifications, error corrections, cosmetic changes, or any other change intended to increase the functionality, compatibility, security or performance of the system. 
(e) "Accessible version" shall mean a version of the software that is available to the public or for private use or that is presently operating within its designated operational parameters.
R-01 Incident Reporting · R-01.1 · Deployer · Automated Decisionmaking
State Tech. Law § 520(1)-(2)
Plain Language
Licensees must report system malfunctions to the Department whenever the system fails to operate as intended for a period during which it harmed, or had the capacity to harm, a person. For systems that interact with law enforcement, government agencies, or weapons systems, additional notification to the relevant law enforcement or government entity is required as a license condition. The 'significant period' trigger is harm-based rather than time-based: any malfunction with the capacity to cause harm triggers the duty, regardless of duration.
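The harm-based trigger can be expressed as a simple decision rule. The sketch below is illustrative only; the `Malfunction` fields and function names are hypothetical constructs for this example, not terms used in the bill.

```python
from dataclasses import dataclass

@dataclass
class Malfunction:
    duration_minutes: float        # irrelevant to the trigger itself
    caused_harm: bool
    could_cause_harm: bool
    touches_law_enforcement: bool  # license-condition flag per section 520(2)

def notification_duties(m: Malfunction) -> list[str]:
    """Illustrative section 520 logic: the 'significant period' test
    is satisfied by harm or harm capacity, whatever the duration."""
    duties = []
    if m.caused_harm or m.could_cause_harm:
        duties.append("department")
        if m.touches_law_enforcement:
            duties.append("relevant law enforcement / government entity")
    return duties
```

Note that `duration_minutes` never enters the logic: a one-minute malfunction with harm capacity is reportable, while a lengthy one with none is not.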
Statutory Text
§ 520. Malfunction and incident reports; duty to notify. 1. A licensee shall have the duty to notify the department and, if applicable, a relevant law enforcement agency or governmental entity where the licensee's system fails to operate as intended for any significant period of time. A period of time is deemed "significant" for purposes of this section where the period of time that the malfunction occurred had the capacity to or has harmed a person or persons. 2. A licensee shall have the duty to notify a relevant law enforcement agency or governmental entity of a malfunction where designated by the department upon receipt of a license. The secretary shall issue such a requirement upon the licensee where such systems interact with law enforcement systems or the systems of a government agency, engage in law enforcement functions or the functions of a government agency, or where such systems operate, in whole or in part, or are, a weapon.
D-01 Automated Processing Rights & Data Controls · D-01.5 · Deployer · Automated Decisionmaking · Biometrics
State Tech. Law § 522(1)-(3)
Plain Language
Licensees may share information and source code with third parties, but when biometric information (faceprints, voiceprints, fingerprints, gaitprints, irisprints, psychological profiles, or other identifying body/mind data) is shared, the receiving party becomes jointly liable for any harm or violations. The Secretary may prohibit specific persons from accessing a licensee's information or source code with written justification. This applies only to information received or generated by the licensee and source code created by the licensee — not to third-party system integration. The joint liability provision for biometric data sharing is a significant compliance consideration for data partnerships.
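For data-sharing pipelines, flagging biometric fields before a transfer is one way to surface the joint-liability consequence in advance. A minimal sketch, assuming hypothetical field names keyed to the § 522(2) categories:

```python
# Hypothetical field names mapped to the section 522(2) biometric
# categories; real schemas will differ.
BIOMETRIC_CATEGORIES = {
    "faceprint", "voiceprint", "fingerprint",
    "gaitprint", "irisprint", "psychological_profile",
}

def sharing_triggers_joint_liability(shared_fields: set[str]) -> bool:
    """True if any field being shared falls in a section 522(2)
    biometric category, which makes the receiving third party
    jointly liable with the licensee for harms or violations."""
    return bool(shared_fields & BIOMETRIC_CATEGORIES)
```

A pre-transfer check like this could gate data-partnership exports, since the catch-all clause (g) means the real analysis is broader than any fixed field list.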
Statutory Text
§ 522. Information and source code sharing. 1. Licensees shall be permitted to share information and source code with any third party, provided however, that where information is biometric information such party shall be jointly liable for any harm or violations under this article with the licensee. The secretary may, in their discretion, prohibit any person from accessing the information or source code of a licensee provided however that the secretary shall provide a written justification for such a prohibition. 2. For purposes of this section, "biometric information" shall include a person's: (a) faceprint; (b) voiceprint; (c) fingerprint; (d) gaitprint; (e) irisprint; (f) psychological profile; or (g) any other data related to a person's body or mind that can be used to identify a person. 3. This section shall only apply to the sharing of information received or generated by the licensee or source code created by the licensee and shall not apply to a third party integrating their systems with the licensee.
Other · Automated Decisionmaking
State Tech. Law § 523(1)-(2)
Plain Language
Third-party systems that integrate with a licensed high-risk AI system must obtain a certificate of compliance from the Department before integration, demonstrating conformance with cybersecurity standards. If the integration gives the third party new high-risk AI capabilities, it must obtain its own full license. Only one certificate is needed regardless of how many licensees the third party integrates with. This creates a supply-chain compliance obligation — licensees should verify that any third-party integration partner holds a certificate of compliance.
Statutory Text
§ 523. Third-party systems; certificates of compliance. 1. Non-licensee third-party systems may integrate with a licensee under the following conditions: (a) Where a third-party system assists in the proper functioning of the licensee or where such system provides additional services to the licensee's service-offerings, such a system shall not be required to obtain a license but shall be required to obtain a certificate of compliance in accordance with this section. (b) No third-party system may access the system of a licensee to provide itself with new high-risk advanced artificial intelligence capabilities without first obtaining a license. 2. Every third-party system which integrates with a licensee shall, prior to integration, apply for and receive a certificate of compliance. Such certificate shall be issued by the department and shall only be issued where such third-party system is assessed by the department and the department finds it conforms to the cybersecurity standards set by the office. The secretary shall set the rules and regulations regarding the application and requirements of receiving a certificate of compliance. This section shall not be construed as requiring any third-party system to receive more than one certificate of compliance.
G-01 AI Governance Program & Documentation · G-01.3 · G-01.4 · Deployer · Automated Decisionmaking
State Tech. Law § 524
Plain Language
Every licensed high-risk AI system must automatically generate operational logs every time it operates. Logs must conform to Secretary-prescribed standards covering event types, format, access controls, encryption, cybersecurity, preservation, and disposal. Logs must be retained for 10 years from generation and are subject to regulatory inspection. The 10-year retention period is significantly longer than typical AI recordkeeping requirements (usually 2-5 years). Operators should plan for substantial data storage and security infrastructure to meet this obligation.
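A log-retention policy keyed to the generation date might look like the following sketch. The schema is hypothetical — the actual event types, formats, and security protocols will be prescribed by the Secretary — and ten years is approximated here as 3,653 days.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# The statute measures retention from the date of generation, so the
# generation timestamp must be stored explicitly. Ten years is
# approximated as 3,653 days in this sketch.
RETENTION = timedelta(days=3653)

@dataclass
class OperationLog:
    event_type: str   # per the Secretary-prescribed event taxonomy
    payload: dict
    generated_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def dispose_after(self) -> datetime:
        return self.generated_at + RETENTION

    def disposable(self, now: datetime) -> bool:
        """A log may be disposed of only after the retention window;
        until then it remains subject to section 526 inspection."""
        return now >= self.dispose_after
```

Because every system operation must generate a log and each log lives a decade, storage planning should assume the archive grows monotonically for the first ten years of operation.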
Statutory Text
§ 524. Logging. Every time a licensee's system operates it shall automatically generate a log. Standards related to the specific types of events that are required to be logged, the format in which logs must be kept, the individuals or entities permitted to access logs and the conditions governing such access, the encryption and cybersecurity protocols to be applied to logs, the procedures for both the preservation and disposal of logs, and any other actions pertinent to log management shall conform to the standards set by the secretary. Such logs shall be preserved for a period of ten years from the date they are generated and shall be subject to inspection under section five hundred twenty-six of this article.
S-01 AI System Safety Program · S-01.1 · Deployer · Automated Decisionmaking
State Tech. Law § 525
Plain Language
Every licensee must maintain kill-switch capability — internal controls that can safely and indefinitely halt the operation of the entire system or a major part of it within a reasonable time after initiation. This is an ongoing operational requirement, not a one-time design obligation. The controls must be able to sustain indefinite shutdown, not just temporary pauses.
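One common engineering pattern for this kind of control is a latching stop flag checked on every operation cycle: once tripped, it stays tripped until explicitly reset, which supports the "indefinitely" requirement. The sketch below is illustrative of that pattern, not a statement of what the bill technically mandates; all names are hypothetical.

```python
import threading

class KillSwitch:
    """Latching stop control: once tripped, the system stays halted
    until the switch is explicitly reset, supporting an indefinite
    shutdown rather than a temporary pause."""
    def __init__(self):
        self._halted = threading.Event()

    def trip(self):
        self._halted.set()

    def halted(self) -> bool:
        return self._halted.is_set()

def run_system(switch: KillSwitch, steps):
    """Process work items, checking the switch before every step so
    operation ceases within one step of the switch being tripped."""
    done = []
    for step in steps:
        if switch.halted():
            break
        done.append(step())
    return done
```

The "within a reasonable time" language maps to how often the flag is checked: a per-operation check bounds the halt latency at one operation cycle.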
Statutory Text
§ 525. Internal controls; ceasing operation. Every licensee shall have in place internal controls that, within a reasonable time following initiation, can safely and indefinitely cease the operation of the system or a major part of the system.
G-01 AI Governance Program & Documentation · G-01.3 · G-01.4 · Deployer · Automated Decisionmaking
State Tech. Law § 527(1)-(2)
Plain Language
Operators must maintain all books, records, source code, and logs required by the Secretary, including at minimum all system-generated logs and a backup of every version of the system, stored securely per Secretary standards. Operators must also file annual reports on business and operations, sworn under penalty of perjury. The Secretary may demand additional regular or special reports at any time. Combined with the § 524 logging requirement and 10-year retention, this creates a comprehensive documentation and recordkeeping obligation covering the full operational lifecycle of every licensed system.
Statutory Text
§ 527. Books, records, source code, and logs to be kept. 1. Every operator shall maintain such books, records, source code, and logs as the secretary shall require provided however that every operator shall, at least, maintain a copy of all logs generated from the system as well as a backup of every version of the system which shall be stored in a safe manner as prescribed by the secretary. 2. By a date to be set by the secretary, each operator shall annually file a report with the secretary giving such information as the secretary may require concerning the business and operations during the preceding calendar year of the operator within the state under the authority of this article. Such report shall be subscribed and affirmed as true by the operator under the penalties of perjury and be in the form prescribed by the secretary. In addition to such annual reports, the secretary may require of operators such additional regular or special reports as the secretary may deem necessary to the proper supervision of operators under this article. Such additional reports shall be in the form prescribed by the secretary and shall be subscribed and affirmed as true under the penalties of perjury.
Other · Automated Decisionmaking
State Tech. Law § 529
Plain Language
This ethical code is binding on all persons developing or operating high-risk AI systems — both licensed and unlicensed. It establishes nine principles: respect for autonomy (no undue manipulation), equity (no bias or discrimination based on protected characteristics), accountability (clear mechanisms for addressing harms), care (no unjustified harm), trust (privacy and data security), inclusivity (diverse user access), oversight (meaningful human oversight), notice (transparency to affected persons), and safety (robustness and misuse prevention). Though framed as principles, these are legally binding and enforceable through the Department's enforcement authority. Violations of this code provide grounds for license actions and civil penalties under § 508.
Statutory Text
§ 529. Advanced artificial intelligence ethical code of conduct. The following ethical code of conduct shall be binding on all licensees and non-licensees who develop or operate a high-risk advanced artificial intelligence system: Respect: Artificial intelligence systems should respect human autonomy and not unduly influence or manipulate individuals' behavior or decisions. Equity: An artificial intelligence system should provide equitable outcomes, irrespective of any characteristics protected by law. They should not perpetuate existing biases, discrimination, or disparities. Accountability: Persons that design, develop, deploy, or use artificial intelligence systems should be held accountable for the impacts and outcomes of these systems except where the law provides otherwise. Clear mechanisms for addressing harms and violations of law should be in place. Care: Artificial intelligence systems should not cause harm or adversely affect individuals, society, or the environment without legal justification. Trust: Artificial intelligence systems should respect individuals' privacy rights, and securely handle personal and sensitive data in accordance with applicable laws and regulations. Inclusivity: Artificial intelligence systems should be designed, developed, and used in ways that are inclusive, serving a diverse range of users and contexts. Oversight: There should always be meaningful human oversight of artificial intelligence systems to ensure ethical use and decision-making. Notice: The operations, decision-making processes, and use of artificial intelligence systems should, where feasible, be made known to persons affected by them. Safety: Artificial intelligence systems should be robust, secure, and reliable. They should have mechanisms in place to prevent misuse or harmful outcomes.
S-02 Prohibited Conduct & Output Restrictions · S-02.1 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 530(1)(a)-(e), (2)-(7)
Plain Language
Five categories of AI systems are categorically prohibited: (1) subliminal manipulation techniques causing physical or psychological harm or exploiting vulnerable groups; (2) systems designed to inflict harm without law enforcement or self-defense justification; (3) predictive behavioral systems that infringe on individual liberty or financial interests without legal justification; (4) systems that unlawfully acquire, retain, or disseminate sensitive personal information; and (5) autonomous weapons lacking meaningful human supervision or control. The Secretary may demand immediate cessation of development or operation, and such demands are binding unless challenged through a formal hearing — but the system must remain shut down during the challenge. Individuals who knowingly operate prohibited systems face class D felony charges and civil penalties equal to the greater of profits earned or damages caused. A narrow exception exists for state-authorized systems developed and used with continuous state oversight following public hearing. After a Secretary demand, all officers and directors are rebuttably presumed to have knowledge.
Statutory Text
§ 530. Prohibited artificial intelligence systems. 1. No person shall develop, in whole or in part, or operate an artificial intelligence system within the state where such a system performs any of the following, whether or not it is the system's main function: (a) the deployment of subliminal techniques that operate beyond an individual's conscious awareness, with the express purpose of materially distorting an individual's behavior in such a manner that leads to, or possesses a high likelihood of leading to, physical or psychological harm to that individual or another, or that leverages the vulnerabilities of a defined group of individuals to similar ends; (b) the infliction of physical or emotional harm upon individuals without any valid law enforcement or self-defense purpose or justification; (c) the prediction of an individual's future actions or behaviors, followed by subsequent reactions based on these predictions, carried out in such a way that, without legal justification, infringes upon or compromises the individual's liberty, emotional, psychological, or financial interests; (d) the unauthorized acquisition, retention, or dissemination of or access to sensitive personal information or non-public data in violation of applicable data privacy, security, and hacking laws; or (e) the implementation of any form of autonomous weapon system designed to inflict harm on persons, property, or the environment that lack meaningful human supervision or control. "Meaningful human supervision or control" shall mean the ability to actively manage, intervene, or override the autonomous weapon system's functions. 2. 
Where the secretary discovers the development or operation of a prohibited artificial intelligence system, the secretary may, in writing, demand that the person who is developing or operating such system cease development or operation of or access to such a system within a period of time as the secretary deems necessary to prevent the system from widespread use or, if the system is operational or accessible to persons for use, to ensure the system is properly terminated in such a way to minimize risks of harm to individuals, society, or the environment. A demand made pursuant to this section shall be finally and irrevocably binding on the person unless the person against whom the demand is made shall, within such period of time set by the secretary, after the giving of notice of such determination, petition the department for a hearing to determine the legal findings of the secretary. The person developing or operating such a prohibited system shall, prior to petition, cease development, operation, and access to the system until and unless such determination is favorable to the person. Such determination may be appealed by any party as of right. 3. The secretary shall not grant a license pursuant to this article to any high-risk advanced artificial intelligence system described under this section except as described in subdivision seven of this section. 4. Any member, officer, director or employee of an operator of any entity who knowingly publicly or privately operates any system described in this section shall be guilty of a class D felony and shall incur a civil penalty of the amount earned from the creation of the prohibited system or the amount of damages caused by the system, whichever is greater. 5. 
This section shall not be construed as imposing liability on any member, officer, director or employee who had no explicit or implicit knowledge of the prohibited high-risk advanced artificial intelligence system provided however that where the secretary sends a demand to cease the development, operation, or access to such system all members, officers, and directors shall be rebuttably presumed to have knowledge of the prohibited high-risk advanced artificial intelligence system. 6. This section shall be construed as prohibiting the development of a prohibited high-risk advanced artificial intelligence system or making such a system accessible to persons in the state of New York. 7. Notwithstanding subdivision one of this section, a person may develop a prohibited high-risk advanced artificial intelligence system where authorized by the secretary, provided that such system is developed and used only by the state or with substantial, continuous oversight by the state and such system is authorized only after public hearing and comment in accordance with section five hundred nine of this article.
Other · Automated Decisionmaking
State Tech. Law § 521(1)-(2)
Plain Language
The Secretary may impose unique requirements by regulation on systems deemed to pose state or national security risks, assessed on a case-by-case basis. A system is deemed a security risk if its malfunction or misuse could disrupt critical infrastructure, trigger conflicts, undermine democracy, compromise classified information, harm significant populations, destabilize financial markets, cause irreversible environmental damage, or significantly harm the social fabric. This is not to be liberally construed to cover any system that could theoretically cause harm — it requires a high risk of the specified categories. The actual requirements will be set by future regulation.
Statutory Text
§ 521. State and national security risks. 1. The secretary may, by regulation, designate unique requirements for systems which, in the secretary's discretion, pose a risk to state or national security. Such systems shall be assessed on a case-by-case basis and shall not be liberally construed as including any system that, where used improperly, inherently possesses the ability to harm persons or property. 2. A high-risk advanced artificial intelligence system shall be deemed to pose a risk to state or national security where the system's malfunctioning or misuse poses a high risk of: (a) disrupting critical infrastructure; (b) triggering or escalating existing conflicts; (c) undermining or impacting the democratic process; (d) causing unauthorized access to classified information as designated by a relevant governmental entity; (e) harming a significant portion of the population or a specific segment of the population; (f) negatively impacting financial markets or economic stability; (g) causing consequential or irreversible damage to the environment; or (h) causing significant harm to the social fabric.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking
State Tech. Law § 526(1)-(4)
Plain Language
The Secretary has broad investigative and examination authority over all licensees and any person suspected of violating this article. The Secretary may compel testimony under oath, subpoena witnesses, and require production of books, records, accounts, documents, source code, and logs. Examination costs — including travel and subsistence — are assessed against and paid by the examined licensee. All investigation reports and correspondence are confidential and not subject to subpoena, unless the Secretary determines publication serves justice and public advantage. Operators must be prepared to produce all records and source code on demand and must budget for examination cost assessments.
Statutory Text
§ 526. Investigations and examinations. 1. The secretary shall have the power to make such investigations as the secretary shall deem necessary to determine whether any operator or any other person has violated any of the provisions of this article, or whether any licensee has conducted itself in such manner as would justify the revocation of its license, and to the extent necessary therefor, the secretary may require the attendance of and examine any person under oath, and shall have the power to compel the production of all relevant books, records, accounts, documents, source code, and logs. 2. The secretary shall have the power to make such examinations of the books, records, accounts, documents, source code, and logs used in the business of any licensee as the secretary shall deem necessary to determine whether any such licensee has violated any of the provisions of this article. 3. The expenses incurred in making any examination pursuant to this section shall be assessed against and paid by the licensee so examined, except that traveling and subsistence expenses so incurred shall be charged against and paid by licensees in such proportions as the secretary shall deem just and reasonable, and such proportionate charges shall be added to the assessment of the other expenses incurred upon each examination. Upon written notice by the secretary of the total amount of such assessment, the licensee shall become liable for and shall pay such assessment to the secretary. 4. 
All reports of examinations and investigations, and all correspondence and memoranda concerning or arising out of such examinations or investigations, including any duly authenticated copy or copies thereof in the possession of any licensee or the department, shall be confidential communications, shall not be subject to subpoena and shall not be made public unless, in the judgment of the secretary, the ends of justice and the public advantage will be subserved by the publication thereof, in which event the secretary may publish or authorize the publication of a copy of any such report or other material referred to in this subdivision, or any part thereof, in such manner as the secretary may deem proper.