A-03356
NY · State · USA
● Pending
Proposed Effective Date
2025-07-26
New York Assembly Bill 3356 — Advanced Artificial Intelligence Licensing Act
Summary

Establishes a comprehensive licensing and registration regime for high-risk advanced AI systems in New York, administered by the Department of State and the Secretary of State. Requires all operators of high-risk AI systems to obtain a license before development or operation, establish independent ethics and risk management boards, submit annual comprehensive risk assessment reports, obtain pre-approval for source code modifications and upgrades, maintain operational logs for 10 years, and implement internal shutdown controls. Creates an Advisory Council for Artificial Intelligence composed of industry members and state agency heads. Categorically prohibits certain AI applications including subliminal manipulation systems, autonomous weapons without meaningful human control, and predictive behavioral systems that infringe on individual liberty. Imposes criminal penalties ranging from misdemeanors to class C felonies for violations including willful uncontainment of high-risk source code and operation of prohibited systems. No private right of action is created; enforcement is exclusively through the Department of State and the Attorney General.

Enforcement & Penalties
Enforcement Authority
The Department of State has primary enforcement authority, with discretion to issue, revoke, cancel, or suspend licenses and to impose civil or criminal penalties against any person found to have violated the article. The Attorney General brings enforcement actions at the request of the Department. The Secretary of State may order summary suspension of licenses, administrative seizure of services, stop orders, and other emergency actions when public health, safety, or welfare requires. Investigators appointed by the Department are deemed peace officers for purposes of enforcing this article. No private right of action is created.
Penalties
Civil penalty not to exceed the greater of the amount gained from the violation or the actual damages caused by the violation; penalty must be proportionate to the violation. Criminal penalties include: misdemeanor (up to $500 fine and/or up to 6 months imprisonment) for false material statements, failure to disclose conflicts of interest by ethics board members, or misrepresentation of risks; class E felony for willful uncontainment of high-risk source code; class A misdemeanor for negligent uncontainment; class C felony for willful or negligent uncontainment of financial systems or prohibited systems; class D felony for knowingly operating a prohibited AI system, plus civil penalty equal to the greater of amount earned from the prohibited system or damages caused. The Attorney General may seek equitable and injunctive relief. Punitive fines may be imposed for willful failure to register.
Who Is Covered
"Operator" shall mean the person who distributes and has control over the development of a high-risk advanced artificial intelligence system. Where a high-risk advanced artificial intelligence system is publicly accessible code, the operator shall be deemed the platform or platforms which host the system.
"Person" shall mean any individual, group of individuals, partnership, corporation, association or any other entity.
What Is Covered
"Advanced artificial intelligence system" shall mean any digital application or software, whether or not integrated with physical hardware, that autonomously performs functions traditionally requiring human intelligence. This includes, but is not limited to the system: (a) Having the ability to learn from and adapt to new data or situations autonomously; or (b) Having the ability to perform functions that require cognitive processes such as understanding, learning, or decision-making for each specific task.
"High-risk advanced artificial intelligence system" shall mean any advanced artificial intelligence system that possesses capabilities that can cause significant harm to the liberty, emotional, psychological, financial, physical, or privacy interests of an individual or groups of individuals, or which have significant implications on governance, infrastructure, or the environment. The director shall assess any such public or private system in determining whether such system requires registration. High-risk advanced artificial intelligence systems shall, at least, include systems that are designed to, whether directly or indirectly, on purpose or without purpose, do the following: (a) Cause material harm to persons, wildlife, or the environment; (b) Manage, control, or significantly influence healthcare or healthcare-related systems, including but not limited to, diagnosis, treatment plans, pharmaceutical recommendation, or storing of patient records; (c) Operate, control, or guide motor vehicles, aircraft, or any other forms of transport which, if it were to malfunction, has a high probability of posing a risk to human safety or environmental integrity; (d) Psychologically profile individuals for the purpose of targeted advertising, behavioral prediction, or the manipulation of user experiences and interactions in products or services; (e) Manage, control, or create critical infrastructure, including but not limited to the supply of water, electricity, gas, and heating, or construction; (f) Facilitate, control, or significantly impact financial systems, including but not limited to control of stock exchanges, stock trading, credit scoring, or other activities where inaccuracies or failures could lead to substantial economic harm for individuals or broader financial instability; (g) Assist, replace, or augment human decision-making in law enforcement, the judiciary, the executive, the legislature, or any government agency; (h) Enable advanced surveillance capabilities; (i) Involve the use or development of autonomous weapons systems that can cause harm, destruction, or engage in conflict without meaningful human intervention; and (j) Decode or interpret neural or cognitive activity.
Compliance Obligations (25 obligations)
R-02 Regulatory Disclosure & Submissions · R-02.3 · Developer · Automated Decisionmaking
State Tech. Law § 510(1)
Plain Language
Any person who develops a high-risk advanced AI system in New York that is actively deployed must disclose the system's existence and function to the Secretary of State by applying for a license or supplemental license. This registration duty is triggered by active deployment, applies regardless of where the system physically operates, and extends to any updates, modifications, upgrades, or expansions of the system's capabilities or intended uses. This is a continuing obligation — not a one-time filing.
Statutory Text
Any person who develops a high-risk advanced artificial intelligence system, whether in whole or in part, in the state that is presently performing functions for its intended purpose or within its designated operational parameters, shall have the duty to disclose the existence and function of said system to the secretary by applying for a license as required under section five hundred eleven of this article or, where applicable, a supplemental license under section five hundred twelve of this article. This duty to disclose shall be triggered by the system's active deployment and usage in its intended context or field of operation and is applicable irrespective of the system's location of operation. This duty extends to any updates, modifications, upgrades, or expansions of the system's capabilities or intended uses.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Automated Decisionmaking
State Tech. Law § 510(2)
Plain Language
Developers of autonomous weapons systems (§ 501(2)(i)) must submit a written pre-development disclosure to the Secretary of State before beginning active development. The disclosure must include the names and addresses of all persons involved in the development, a description of the system, its functions and intended use cases, and the measures that will be taken to mitigate the system's risks. The Secretary may order the person to cease development if, in the Secretary's discretion, the system has a high likelihood of violating the ethical code of conduct or the prohibited systems provisions. This is a heightened obligation that applies only to the autonomous weapons subcategory of high-risk systems.
Statutory Text
Any person developing a system as defined in paragraph (i) of subdivision two of section five hundred one of this article within the state shall disclose in writing to the secretary the development of such a system prior to active development of the system. Such writing shall set forth the names and addresses of all persons involved in the development of such system, a description of the system, the systems functions and intended use cases, and measures that will be taken to ensure that any risks posed by the system are mitigated. The secretary may, upon receipt of such writing, require such person to cease development of such a system where, in the secretary's discretion, the secretary believes the system has a high likelihood of violating section five hundred twenty-nine or section five hundred thirty of this article.
Other · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 511(1)
Plain Language
No person may develop or operate a high-risk advanced AI system in New York without first obtaining a license from the Secretary of State. For autonomous weapons systems (§ 501(2)(i)), the license requirement applies both to in-state development and to in-state operation of systems developed elsewhere. For all other high-risk systems, the license requirement applies to in-state operation. Operating without a license is prohibited regardless of where the system was developed.
Statutory Text
No person shall (a) develop, in whole or in part, a high-risk advanced artificial intelligence system as defined in paragraph (i) of subdivision two of section five hundred one of this article or operate such a system that is presently performing functions for its intended purpose or within its designated operation parameters within the state where such system was developed outside of the state; or (b) operate a high-risk advanced artificial intelligence system other than a system as defined in paragraph (i) of subdivision two of section five hundred one of this article that is presently performing functions for its intended purpose or within its designated operational parameters within the state without first obtaining a license.
Other · Developer · Automated Decisionmaking
State Tech. Law § 512(1)-(2)
Plain Language
Entity licensees (non-natural persons) that develop additional high-risk AI systems after their initial license must obtain a separate supplemental license for each new system. The supplemental license is subject to the same application requirements, duties, and prohibitions as the initial license. This means each high-risk system must be individually authorized by the Secretary.
Statutory Text
1. Where a person other than a natural person is licensed under this article, such person shall apply for a supplemental license for each additional high-risk advanced artificial intelligence system such person develops after being licensed initially pursuant to section five hundred eleven of this article. 2. Notwithstanding any provision of law, rule or regulation to the contrary, a supplemental license shall be provided in the same manner as a license granted pursuant to the provisions of section five hundred eleven of this article and shall be subject to the same requirements, duties and prohibitions as provided for in this article.
R-02 Regulatory Disclosure & Submissions · R-02.3 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 513(1)-(4)
Plain Language
License applications must be in writing, under oath, and include: the applicant's name and address (including partnership/corporate details), the names and addresses of each ethics and risk management board member and each principal and officer, and a description of all known general use cases of the system. The Secretary substantively reviews each application and may refuse to issue a license based on the ethics, experience, character, and fitness of the applicant. A denied applicant receives a license fee refund but not the investigation fee. Licenses remain in force until surrendered, revoked, or suspended.
Statutory Text
1. An application for a license required under this article shall be in writing, under oath, and in the form prescribed by the secretary, and shall contain the following: (a) the exact name and address of the applicant, and if the applicant be a co-partnership or association, the names of the members thereof, and if a corporation the date and place of its incorporation; (b) the name and the business and residential address of each member of the ethics and risk management board, each principal, and officer of the applicant; and (c) the description of all known general use cases of the advanced artificial intelligence system, including any purposes foreseen to be implemented by the applicant. A "use case" shall be defined as broad category of potential use. 2. After the filing of an application for a license accompanied by payment of the fees for license and investigation, it shall be substantively reviewed. After the application is deemed sufficient and complete, the secretary shall issue the license, or the secretary may refuse to issue the license if the secretary shall find that the ethics, experience, character and general fitness of the applicant or any person associated with the applicant are not such as to command the confidence of the community and to warrant the belief that the business will be conducted honestly, fairly and efficiently within the purposes and intent of this article. 3. If the secretary refuses to issue a license, the secretary shall notify the applicant of the denial, return to the applicant the sum paid as a license fee, but retain the investigation fee to cover the costs of investigating the applicant. 4. Each license issued pursuant to this article shall remain in full force unless it is surrendered by the licensee, revoked or suspended.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 514(1)-(2)
Plain Language
Licensees must conspicuously display their license in their physical office and, if they have an internet presence, on their website or mobile application. The license must state the licensee's name, address, and corporate details. Licenses are non-transferable and non-assignable. This is a public transparency obligation — the purpose is to enable users and the public to verify that an operator is licensed.
Statutory Text
1. Any license issued under this article shall state the name and address of the licensee, and if the licensee be a co-partnership or association, the names of the members thereof, and if a corporation the date and place of its incorporation. 2. Such license or licenses shall be kept conspicuously posted in the office of the licensee and, where such licensee has a public internet presence, on the website or mobile application of the licensee and shall not be transferable or assignable.
G-01 AI Governance Program & Documentation · G-01.6 · Deployer · Automated Decisionmaking
State Tech. Law § 516(1)-(3)
Plain Language
Every operator of a licensed high-risk AI system must establish an ethics and risk management board of at least five individuals. Board members must be independent — they cannot be members, officers, or directors of the operator's entity and need not be employed by the operator. The board must assess the ethical implications of all possible use cases (intended and unintended, likely and unlikely) and the system's current operational outcomes. Entity operators with multiple licensed systems need only one board. The board must adopt its own governance rules, which cannot conflict with the statute.
Statutory Text
1. Every operator of a licensed high-risk advanced artificial intelligence system or systems shall establish an ethics and risk management board composed of no less than five individuals who shall have the responsibility to assess the ethical implications of all possible use cases of the system, whether such use cases are intended or unintended, and whether likely or unlikely to be used, and the current operational outcomes of the system. Such operator, other than an operator who is a natural person, operating more than one high-risk advanced artificial intelligence system with a supplemental license shall not be required to have more than one ethics and risk management board for each system. 2. No member of an ethics and risk management board shall be a member, officer, or director within the operator's entity. No member shall be required to be employed by the operator. 3. Such board shall adopt rules governing its decision-making processes, duties and responsibilities. Such rules shall not conflict with the provisions of this article.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking
State Tech. Law § 516(4)(a)-(h), (5)
Plain Language
The ethics and risk management board must submit an annual comprehensive report to the Secretary for each licensed high-risk AI system. The report must include: all possible use cases (intended/unintended, likely/unlikely), a thorough risk assessment for each use case covering privacy, security, fairness, economic, societal, and environmental impacts, an evaluation of whether known use cases should be constrained or banned, a mitigation plan for each identified risk, a review of all incidents and failures in the past year, user education plans considering varying digital literacy, disclosure of board conflicts of interest, and a compliance update. Board members who make false statements, fail to disclose conflicts, or misrepresent risks face criminal liability — misdemeanor punishable by up to $500 fine and/or 6 months imprisonment.
Statutory Text
4. Annually, the ethics and risk management board of each operator shall submit to the secretary a comprehensive report for each licensed high-risk advanced artificial intelligence system which consists of the following: (a) All possible use cases, whether intended or unintended, whether likely or unlikely. (b) A thorough risk assessment for each use case, considering and evaluating the potential for harm, irrespective of the probability of such risk materializing. This shall include, but not be limited to, the system's potential impact on privacy, security, fairness, economic implications, societal well-being, and safety of persons and the environment. (c) A detailed evaluation of known use cases of the system by users, exploring whether certain applications ought to be constrained or banned due to ethical considerations. This shall include an assessment of the operator's capacity to impose such constraints on use cases. (d) A mitigation plan for each identified risk, including preemptive measures, monitoring processes, and responsive actions. This shall also include a communication strategy to inform users and stakeholders about potential risks and steps taken to mitigate them. (e) A comprehensive review of any incidents or failures of the system in the past year, detailing the circumstances, impacts, measures taken to address the issue, and modifications made to prevent such incidents in the future. (f) Any existing attempts to educate users and, based on the existing use of the system by users, a detailed plan on how the operator intends to inform and instruct users on the safe and ethical use of the system, considering varying levels of digital literacy among users. (g) A disclosure of any conflicts of interest within the ethics board, which could potentially influence the board's decisions and recommendations. This shall include measures to manage and resolve such conflicts. (h) An update on the measures taken by the operator to ensure the system's adherence to existing laws, regulations, and ethical guidelines related to artificial intelligence. 5. In addition to any applicable civil penalties pursuant to section five hundred eight of this article, a member of an ethics and risk management board who makes a false statement, fails to disclose conflicts of interest or misrepresents the risks or severity of the risks posed by a system in the performance of their duties as a member of such board, shall be guilty of a misdemeanor and, upon conviction, shall be fined not more than five hundred dollars or imprisoned for not more than six months or both, in the discretion of the court.
S-01 AI System Safety Program · S-01.4 · S-01.7 · Deployer · Automated Decisionmaking
State Tech. Law § 517(1)-(4)
Plain Language
The Secretary periodically evaluates the source code and outcomes of each licensed high-risk AI system to determine compliance, with review frequency based on system risk, complexity, update frequency, and compliance history. After review, the Secretary issues binding recommendations for alignment with the ethical code of conduct, prohibited systems restrictions, and source code modification procedures. Operators must consult with the Secretary, provide a binding implementation plan and timeline, and may request amendments for unexpected circumstances (subject to 30-day Secretary approval). The Secretary monitors implementation and may impose fines for non-compliance. While the Secretary initiates the review, the operator has an affirmative obligation to cooperate, develop the compliance plan, and implement recommendations.
Statutory Text
1. The secretary shall conduct periodic evaluations of the source code and outcomes associated with each high-risk advanced artificial intelligence system. These examinations shall determine whether the system is in compliance with this article. The timing and frequency of these reviews shall be determined at the secretary's discretion, taking into account the potential risk posed by the system, the complexity of the system, the frequency of updates and upgrades, the complexity of such updates and upgrades, and any previous issues of non-compliance. 2. Upon completion of the review, the secretary is empowered to make binding recommendations to the operator to ensure the system's functionality and outcomes are aligned with the principles in the advanced artificial intelligence ethical code of conduct pursuant to section five hundred twenty-nine of this article, restrictions on prohibited artificial intelligence systems pursuant to section five hundred thirty of this article, and limitations and procedures for source code modifications, updates, upgrades, and rewrites pursuant to section five hundred nineteen of this article. 3. Following receipt of the secretary's recommendations, the operator shall consult with the secretary to determine the feasibility of implementing the recommendations and the time frame in which such recommendations can be implemented to ensure full compliance with the secretary's recommendations. The operator shall provide a detailed plan outlining how the recommendations will be addressed, along with a timeline for their implementation. The detailed plan shall be binding on the operator; provided however that where an unexpected occurrence arises which causes changes to such plan, the operator shall be entitled to extend such timeline or alter such plans where such operator notifies the secretary in writing regarding the unexpected occurrence and, within such writing, sets forth amendments to the detailed plan and timeline. The secretary shall have thirty days to approve or reject such amendments. Where such amendments are rejected, the operator shall continue with their original plan and timeline. 4. The secretary shall monitor the operator's compliance with such recommendations and may impose fines and other penalties pursuant to the provisions of this article for non-compliance that the secretary shall deem just and proportionate to the violation.
Other · Developer · Automated Decisionmaking
State Tech. Law § 518(1)-(5)
Plain Language
Developers of high-risk AI systems (whether licensed or not) are prohibited from willfully or negligently allowing their source code to become uncontained — i.e., reproduced so widely that it becomes practically impossible to control. Written authorization from the Secretary is required for any permissible uncontainment. Criminal penalties are graduated: willful violation is a class E felony; negligent violation is a class A misdemeanor; and uncontainment of financial systems or prohibited systems is a class C felony. A knowledge defense is available — individuals with no explicit or implicit knowledge of the risk or circumstances that caused uncontainment are not liable.
Statutory Text
1. No licensee or non-licensee who develops a high-risk advanced artificial intelligence system shall willfully or negligently uncontain their source code except where authorized by the secretary in writing. 2. Any member, officer, director or employee of an entity who willfully violates subdivision one of this section shall be guilty of a class E felony. 3. Any member, officer, director or employee of an entity who negligently violates subdivision one of this section shall be guilty of a class A misdemeanor. 4. Where any member, officer, director or employee or an entity willfully or negligently uncontains a high-risk advanced artificial intelligence system described in paragraph (f) of subdivision two of section five hundred one of this article or a prohibited high-risk advanced artificial intelligence system as described in section five hundred thirty of this article shall be guilty of a class C felony. 5. The provisions of this section shall not be construed as imposing liability on any member, officer, director or employee who had no explicit or implicit knowledge of the risk or circumstances that caused the uncontainment of the high-risk advanced artificial intelligence system.
Other · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 519(1)-(5)
Plain Language
Licensees must obtain written pre-approval from the Secretary before implementing any modification (altering decision-making) or upgrade (adding features) to their high-risk AI system's source code. The licensee must submit a written description of the change's purpose, new or modified functions, reasons, and risk assessment. The Secretary has 30 business days to approve, extendable by up to 30 additional business days; if no response is received within that period, the change is deemed approved. For rewrites (substantial changes that effectively create a new system), the Secretary reviews the source code in the same manner as a new application within 180 business days, extendable by up to 180 additional days. All modifications, upgrades, and rewrites must be conducted in a pre-production environment. Minor updates (bug fixes, security patches, cosmetic changes) are exempt. Rejections must include written reasons and the steps the licensee can take to obtain approval.
Statutory Text
1. Where a licensee intends to modify or upgrade the source code of their high-risk advanced artificial intelligence system, such licensee shall be required to inform the secretary of such modification or upgrade and shall be prohibited from implementing such modification or upgrade in an accessible version of the system without express consent of the secretary in writing. This section shall not apply to source code updates. 2. A licensee shall, in writing to the secretary, set forth the purpose of the modification or upgrade, the new functions added to the system or the functions modified, the reason for the modification or upgrade, and an assessment of new risks or risks that may be more probable as a result of the modification or upgrade. The secretary shall, upon receipt of notice, have thirty business days to provide the licensee with approval of the modification or upgrade. Where approval is not received within thirty business days, absent an extension in writing which shall not exceed thirty additional business days, the modification or upgrade shall be deemed approved. Nothing in this subdivision shall be construed as limiting the ability of the secretary to take any action they are authorized to take in relation to the approved modification or upgrade. Where the secretary rejects the modification or upgrade, the secretary shall set forth in writing the reasons for the rejection and steps that the licensee can take to receive approval. Where the secretary approves the modification or upgrade, the licensee may immediately implement such modification or upgrade in a publicly accessible version. 3. A licensee who rewrites the source code of its system shall comply with the same standards set forth in subdivisions one and two of this section provided however that the secretary shall examine such source code in the same manner as a new application and shall provide a letter of approval or rejection upon completion of such review within one hundred eighty business days of receipt of such notices except where the secretary requires an extension of time, then an extension of no more than one hundred eighty days shall be authorized. Where the secretary rejects the rewrite, such letter of rejection shall state the reasons for the rejection and steps that the licensee can take to correct such rejection, if any. Where the secretary approves the modification or upgrade, the licensee may immediately implement such modification or upgrade in a publicly accessible version. 4. All modifications, upgrades, and rewrites shall be conducted in a pre-production environment, which shall mean any stage prior to the accessible version. 5. For purposes of this section: (a) "Modify" shall mean altering the source code of the system to alter the way by which the system, or any features within the system, makes decisions. (b) "Upgrade" shall mean altering the source code of the system which gives it new features or functions. (c) "Rewrite" shall mean a change in the source code to such a substantial degree that: (i) it effectively results in a new version of the system; or (ii) the change nullifies all or a substantial amount of the initial findings of the secretary in the operator's original application. (d) "Update" shall mean a change to the source code that includes minor enhancements, improvements, modifications, error corrections, cosmetic changes, or any other change intended to increase the functionality, compatibility, security or performance of the system. (e) "Accessible version" shall mean a version of the software that is available to the public or for private use or that is presently operating within its designated operational parameters.
R-01 Incident Reporting · R-01.1 · Deployer · Automated Decisionmaking
State Tech. Law § 520(1)-(2)
Plain Language
Licensees must report system malfunctions to the Department and, where applicable, to a relevant law enforcement agency or governmental entity. A malfunction is reportable when it lasts long enough that it harmed, or had the capacity to harm, a person. For systems that interact with law enforcement or government systems, perform government functions, or operate as weapons, the Department may impose additional agency-specific reporting requirements when it issues the license. The statute sets no reporting timeframe; it establishes the duty and delegates timing details to the Department.
Statutory Text
1. A licensee shall have the duty to notify the department and, if applicable, a relevant law enforcement agency or governmental entity where the licensee's system fails to operate as intended for any significant period of time. A period of time is deemed "significant" for purposes of this section where the period of time that the malfunction occurred had the capacity to or has harmed a person or persons. 2. A licensee shall have the duty to notify a relevant law enforcement agency or governmental entity of a malfunction where designated by the department upon receipt of a license. The secretary shall issue such a requirement upon the licensee where such systems interact with law enforcement systems or the systems of a government agency, engage in law enforcement functions or the functions of a government agency, or where such systems operate, in whole or in part, or are, a weapon.
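The significance test in subdivision one is a two-prong disjunction: a malfunction period triggers the reporting duty if it either harmed a person or had the capacity to do so. A minimal sketch of how an operator's compliance tooling might encode that test (the record type and field names are illustrative assumptions, not statutory terms):

```python
from dataclasses import dataclass

@dataclass
class MalfunctionReport:
    """One period during which the system failed to operate as intended."""
    description: str
    caused_harm: bool        # the malfunction harmed a person or persons
    capacity_for_harm: bool  # the malfunction had the capacity to cause such harm

def is_significant(event: MalfunctionReport) -> bool:
    # Under the statute, a malfunction period is "significant" -- and therefore
    # reportable to the Department -- where it had the capacity to harm, or
    # did harm, a person or persons. Either prong alone suffices.
    return event.caused_harm or event.capacity_for_harm
```

Note that duration itself is not an element: a brief malfunction with the capacity to harm still qualifies, while a long harmless one does not.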
D-01 Automated Processing Rights & Data Controls · D-01.5 · Deployer · Automated Decisionmaking · Biometrics
State Tech. Law § 522(1)-(3)
Plain Language
Licensees may share information and source code with third parties, but when the shared information includes biometric data (faceprints, voiceprints, fingerprints, gaitprints, irisprints, psychological profiles, or other data about a person's body or mind that can identify them), the receiving third party becomes jointly liable with the licensee for any harm or violations under the article. The Secretary may prohibit specific persons from accessing a licensee's information or source code, with written justification required. This applies only to information received or generated by the licensee and source code the licensee created; it does not apply to third parties integrating their systems with the licensee.
Statutory Text
1. Licensees shall be permitted to share information and source code with any third party, provided however, that where information is biometric information such party shall be jointly liable for any harm or violations under this article with the licensee. The secretary may, in their discretion, prohibit any person from accessing the information or source code of a licensee provided however that the secretary shall provide a written justification for such a prohibition. 2. For purposes of this section, "biometric information" shall include a person's: (a) faceprint; (b) voiceprint; (c) fingerprint; (d) gaitprint; (e) irisprint; (f) psychological profile; or (g) any other data related to a person's body or mind that can be used to identify a person. 3. This section shall only apply to the sharing of information received or generated by the licensee or source code created by the licensee and shall not apply to a third party integrating their systems with the licensee.
Other · Deployer · Automated Decisionmaking
State Tech. Law § 523(1)-(2)
Plain Language
Third-party systems that integrate with a licensed high-risk AI system must obtain a certificate of compliance from the Department before integration. The certificate requires the Department to assess the third-party system and confirm it meets cybersecurity standards. If a third-party system accesses the licensee's system to acquire new high-risk AI capabilities for itself, it must obtain its own license, not just a certificate. Only one certificate per third-party system is needed regardless of how many licensees it integrates with.
Statutory Text
1. Non-licensee third-party systems may integrate with a licensee under the following conditions: (a) Where a third-party system assists in the proper functioning of the licensee or where such system provides additional services to the licensee's service-offerings, such a system shall not be required to obtain a license but shall be required to obtain a certificate of compliance in accordance with this section. (b) No third-party system may access the system of a licensee to provide itself with new high-risk advanced artificial intelligence capabilities without first obtaining a license. 2. Every third-party system which integrates with a licensee shall, prior to integration, apply for and receive a certificate of compliance. Such certificate shall be issued by the department and shall only be issued where such third-party system is assessed by the department and the department finds it conforms to the cybersecurity standards set by the office. The secretary shall set the rules and regulations regarding the application and requirements of receiving a certificate of compliance. This section shall not be construed as requiring any third-party system to receive more than one certificate of compliance.
G-01 AI Governance Program & Documentation · G-01.3 · G-01.4 · Deployer · Automated Decisionmaking
State Tech. Law § 524
Plain Language
Every licensed high-risk AI system must automatically generate chronological logs every time it operates. Logs must record significant or notable occurrences, actions, and anomalies. The Secretary sets detailed standards for: what events must be logged, log format, who may access logs, encryption and cybersecurity protocols, and preservation and disposal procedures. Logs must be preserved for 10 years from generation and are subject to inspection by the Secretary. This is among the longest recordkeeping retention periods in AI regulation; most jurisdictions require two to five years.
Statutory Text
Every time a licensee's system operates it shall automatically generate a log. Standards related to the specific types of events that are required to be logged, the format in which logs must be kept, the individuals or entities permitted to access logs and the conditions governing such access, the encryption and cybersecurity protocols to be applied to logs, the procedures for both the preservation and disposal of logs, and any other actions pertinent to log management shall conform to the standards set by the secretary. Such logs shall be preserved for a period of ten years from the date they are generated and shall be subject to inspection under section five hundred twenty-six of this article.
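Operationally, § 524 means every system run must emit a timestamped record whose lifecycle (format, access, encryption, disposal) follows the Secretary's standards. A minimal sketch of such a log record, with the retention horizon stamped at generation time; all field names are illustrative assumptions, since the actual required schema is left to the Secretary:

```python
from datetime import datetime, timezone

RETENTION_YEARS = 10  # § 524: logs preserved for ten years from the date generated

def _add_years(dt: datetime, years: int) -> datetime:
    """Advance a datetime by whole years, mapping Feb 29 into Feb 28 when needed."""
    try:
        return dt.replace(year=dt.year + years)
    except ValueError:  # Feb 29 landing in a non-leap target year
        return dt.replace(year=dt.year + years, day=28)

def generate_log_entry(event_type: str, detail: dict) -> dict:
    """Build one chronological log record for a system operation.

    The events that must be logged, the storage format, access rules, and
    encryption protocols are all set by the Secretary; this sketch only
    illustrates stamping the statutory retention period at generation.
    """
    generated_at = datetime.now(timezone.utc)
    return {
        "timestamp": generated_at.isoformat(),
        "event_type": event_type,  # e.g. an action, anomaly, or notable occurrence
        "detail": detail,
        # Earliest date the record may lawfully be disposed of.
        "retain_until": _add_years(generated_at, RETENTION_YEARS).isoformat(),
    }
```

Stamping `retain_until` per record matters because the ten-year clock runs from each log's generation date, not from a system-wide date.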
Other · Deployer · Automated Decisionmaking
State Tech. Law § 525
Plain Language
Every licensee must maintain internal controls, in effect a kill switch, capable of safely and indefinitely shutting down the entire system or a major portion of it within a reasonable time after activation. This is a continuous readiness requirement, not a one-time implementation. The standard is a "reasonable time," which will likely vary with system complexity and risk profile.
Statutory Text
Every licensee shall have in place internal controls that, within a reasonable time following initiation, can safely and indefinitely cease the operation of the system or a major part of the system.
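The statute's three elements are initiation, safety, and indefiniteness: the control must be standing, must halt operation cleanly rather than abruptly, and must keep the system halted. A minimal sketch of such a control using a one-way flag that worker loops consult; the class and method names are illustrative assumptions:

```python
import threading

class ShutdownControl:
    """Sketch of a § 525 internal control: once initiated, it safely and
    indefinitely ceases operation of the system or a major part of it."""

    def __init__(self) -> None:
        self._halt = threading.Event()

    def initiate_shutdown(self) -> None:
        # The flag is set once and never cleared: cessation is indefinite.
        self._halt.set()

    def operation_permitted(self) -> bool:
        # Worker loops check this between units of work, so shutdown
        # completes within a bounded ("reasonable") time after initiation
        # and each unit of work finishes cleanly rather than being cut off.
        return not self._halt.is_set()
```

Checking the flag between units of work, rather than killing threads outright, is one way to read the statute's "safely" requirement: in-flight operations drain before the system goes quiet.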
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking
State Tech. Law § 526(1)-(4)
Plain Language
The Secretary has broad investigative and examination authority, including the power to compel production of all relevant books, records, accounts, documents, source code, and logs, and to examine persons under oath. Examination expenses are assessed to and paid by the examined licensee. All examination and investigation reports are confidential and not subject to subpoena unless the Secretary determines publication serves justice and the public interest. Operators must maintain their records in a form that can be produced to the Secretary, and must bear the financial cost of regulatory examinations.
Statutory Text
1. The secretary shall have the power to make such investigations as the secretary shall deem necessary to determine whether any operator or any other person has violated any of the provisions of this article, or whether any licensee has conducted itself in such manner as would justify the revocation of its license, and to the extent necessary therefor, the secretary may require the attendance of and examine any person under oath, and shall have the power to compel the production of all relevant books, records, accounts, documents, source code, and logs. 2. The secretary shall have the power to make such examinations of the books, records, accounts, documents, source code, and logs used in the business of any licensee as the secretary shall deem necessary to determine whether any such licensee has violated any of the provisions of this article. 3. The expenses incurred in making any examination pursuant to this section shall be assessed against and paid by the licensee so examined, except that traveling and subsistence expenses so incurred shall be charged against and paid by licensees in such proportions as the secretary shall deem just and reasonable, and such proportionate charges shall be added to the assessment of the other expenses incurred upon each examination. Upon written notice by the secretary of the total amount of such assessment, the licensee shall become liable for and shall pay such assessment to the secretary. 4. 
All reports of examinations and investigations, and all correspondence and memoranda concerning or arising out of such examinations or investigations, including any duly authenticated copy or copies thereof in the possession of any licensee or the department, shall be confidential communications, shall not be subject to subpoena and shall not be made public unless, in the judgment of the secretary, the ends of justice and the public advantage will be subserved by the publication thereof, in which event the secretary may publish or authorize the publication of a copy of any such report or other material referred to in this subdivision, or any part thereof, in such manner as the secretary may deem proper.
G-01 AI Governance Program & Documentation · G-01.3 · G-01.4 · Deployer · Automated Decisionmaking
State Tech. Law § 527(1)-(2)
Plain Language
Every operator must maintain all books, records, source code, and logs as required by the Secretary, with minimum requirements including all system-generated logs and a backup of every version of the system, stored securely as prescribed. Operators must file an annual report with the Secretary covering business and operations for the preceding calendar year, subscribed under penalties of perjury. The Secretary may also require additional regular or special reports at any time. All reports must be affirmed as true under penalty of perjury. The system-version backup requirement is distinctive: operators must maintain a historical archive of every version of their AI system.
Statutory Text
1. Every operator shall maintain such books, records, source code, and logs as the secretary shall require provided however that every operator shall, at least, maintain a copy of all logs generated from the system as well as a backup of every version of the system which shall be stored in a safe manner as prescribed by the secretary. 2. By a date to be set by the secretary, each operator shall annually file a report with the secretary giving such information as the secretary may require concerning the business and operations during the preceding calendar year of the operator within the state under the authority of this article. Such report shall be subscribed and affirmed as true by the operator under the penalties of perjury and be in the form prescribed by the secretary. In addition to such annual reports, the secretary may require of operators such additional regular or special reports as the secretary may deem necessary to the proper supervision of operators under this article. Such additional reports shall be in the form prescribed by the secretary and shall be subscribed and affirmed as true under the penalties of perjury.
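In practice, the § 527(1) backup duty means each released version of the system must land in a durable archive before it is superseded. A minimal sketch of one archiving step; the directory layout and the SHA-256 digest are illustrative assumptions, since the statute leaves "stored in a safe manner" to the Secretary's prescription:

```python
import hashlib
import shutil
from pathlib import Path

def archive_system_version(artifact: Path, archive_dir: Path, version: str) -> Path:
    """Retain a backup of one released system version.

    § 527(1) requires a backup of every version of the system; the naming
    scheme and integrity digest here are sketch-level choices, not
    statutory requirements.
    """
    archive_dir.mkdir(parents=True, exist_ok=True)
    dest = archive_dir / f"{version}-{artifact.name}"
    shutil.copy2(artifact, dest)  # copies content and preserves metadata
    # Record a digest so a later § 526 examination can verify the archived copy.
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    (dest.parent / (dest.name + ".sha256")).write_text(digest + "\n")
    return dest
```

Pairing each backup with a recorded digest gives the operator a way to demonstrate, under the Secretary's inspection power, that the archived version is the one actually released.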
Other · DeveloperDeployer · Automated Decisionmaking
State Tech. Law § 529
Plain Language
A legally binding ethical code of conduct applies to all persons (licensed or not) who develop or operate high-risk AI systems. The code establishes nine principles: Respect (no undue manipulation), Equity (equitable outcomes without bias or discrimination), Accountability (accountability for impacts with clear harm-addressing mechanisms), Care (no harm without justification), Trust (privacy and data security), Inclusivity (inclusive design), Oversight (meaningful human oversight), Notice (transparency to affected persons), and Safety (robust, secure, reliable systems). While framed as principles, this code is legally binding and violations are subject to the article's enforcement provisions, including civil penalties and license revocation.
Statutory Text
The following ethical code of conduct shall be binding on all licensees and non-licensees who develop or operate a high-risk advanced artificial intelligence system: Respect: Artificial intelligence systems should respect human autonomy and not unduly influence or manipulate individuals' behavior or decisions. Equity: An artificial intelligence system should provide equitable outcomes, irrespective of any characteristics protected by law. They should not perpetuate existing biases, discrimination, or disparities. Accountability: Persons that design, develop, deploy, or use artificial intelligence systems should be held accountable for the impacts and outcomes of these systems except where the law provides otherwise. Clear mechanisms for addressing harms and violations of law should be in place. Care: Artificial intelligence systems should not cause harm or adversely affect individuals, society, or the environment without legal justification. Trust: Artificial intelligence systems should respect individuals' privacy rights, and securely handle personal and sensitive data in accordance with applicable laws and regulations. Inclusivity: Artificial intelligence systems should be designed, developed, and used in ways that are inclusive, serving a diverse range of users and contexts. Oversight: There should always be meaningful human oversight of artificial intelligence systems to ensure ethical use and decision-making. Notice: The operations, decision-making processes, and use of artificial intelligence systems should, where feasible, be made known to persons affected by them. Safety: Artificial intelligence systems should be robust, secure, and reliable. They should have mechanisms in place to prevent misuse or harmful outcomes.
S-02 Prohibited Conduct & Output Restrictions · S-02.1 · DeveloperDeployer · Automated Decisionmaking
State Tech. Law § 530(1)(a)-(b)
Plain Language
It is categorically prohibited to develop or operate within New York any AI system that: (a) deploys subliminal techniques operating beyond conscious awareness to materially distort behavior, causing or likely to cause physical or psychological harm, or that leverages group vulnerabilities to similar ends; or (b) inflicts physical or emotional harm on individuals without valid law enforcement or self-defense justification. These prohibitions apply regardless of whether the prohibited function is the system's main function. Violation by a knowing operator is a class D felony plus civil penalties equal to the greater of the amount earned or the damages caused.
Statutory Text
No person shall develop, in whole or in part, or operate an artificial intelligence system within the state where such a system performs any of the following, whether or not it is the system's main function: (a) the deployment of subliminal techniques that operate beyond an individual's conscious awareness, with the express purpose of materially distorting an individual's behavior in such a manner that leads to, or possesses a high likelihood of leading to, physical or psychological harm to that individual or another, or that leverages the vulnerabilities of a defined group of individuals to similar ends; (b) the infliction of physical or emotional harm upon individuals without any valid law enforcement or self-defense purpose or justification;
S-02 Prohibited Conduct & Output Restrictions · DeveloperDeployer · Automated Decisionmaking
State Tech. Law § 530(1)(c)-(d)
Plain Language
It is prohibited to develop or operate within New York any AI system that: (c) predicts individual future actions or behaviors and then takes reactive actions based on those predictions that, without legal justification, infringe on the individual's liberty or emotional, psychological, or financial interests; in effect, this prohibits predictive policing and predictive behavioral response systems that lack legal authorization; or (d) acquires, retains, disseminates, or accesses sensitive personal information or non-public data in violation of existing privacy, security, and hacking laws. Subdivision (d) largely cross-references existing law rather than creating an independent prohibition.
Statutory Text
(c) the prediction of an individual's future actions or behaviors, followed by subsequent reactions based on these predictions, carried out in such a way that, without legal justification, infringes upon or compromises the individual's liberty, emotional, psychological, or financial interests; (d) the unauthorized acquisition, retention, or dissemination of or access to sensitive personal information or non-public data in violation of applicable data privacy, security, and hacking laws;
S-02 Prohibited Conduct & Output Restrictions · DeveloperDeployer · Automated Decisionmaking
State Tech. Law § 530(1)(e), (2)-(7)
Plain Language
Autonomous weapons systems that inflict harm on persons, property, or the environment without meaningful human supervision or control are categorically prohibited. "Meaningful human supervision or control" means the ability to actively manage, intervene, or override the system's functions. The Secretary may demand immediate cessation of development or operation of any prohibited system, which is binding unless the person petitions for a hearing (during which the system must remain shut down). Knowing operation by officers, directors, or employees is a class D felony with civil penalties. A lack-of-knowledge defense exists, but once the Secretary issues a cease demand, all members, officers, and directors are rebuttably presumed to have knowledge. A narrow exception permits state-authorized development under substantial, continuous state oversight after public hearing and comment.
Statutory Text
(e) the implementation of any form of autonomous weapon system designed to inflict harm on persons, property, or the environment that lack meaningful human supervision or control. "Meaningful human supervision or control" shall mean the ability to actively manage, intervene, or override the autonomous weapon system's functions. 2. Where the secretary discovers the development or operation of a prohibited artificial intelligence system, the secretary may, in writing, demand that the person who is developing or operating such system cease development or operation of or access to such a system within a period of time as the secretary deems necessary to prevent the system from widespread use or, if the system is operational or accessible to persons for use, to ensure the system is properly terminated in such a way to minimize risks of harm to individuals, society, or the environment. A demand made pursuant to this section shall be finally and irrevocably binding on the person unless the person against whom the demand is made shall, within such period of time set by the secretary, after the giving of notice of such determination, petition the department for a hearing to determine the legal findings of the secretary. The person developing or operating such a prohibited system shall, prior to petition, cease development, operation, and access to the system until and unless such determination is favorable to the person. Such determination may be appealed by any party as of right. 3. The secretary shall not grant a license pursuant to this article to any high-risk advanced artificial intelligence system described under this section except as described in subdivision seven of this section. 4. 
Any member, officer, director or employee of an operator of any entity who knowingly publicly or privately operates any system described in this section shall be guilty of a class D felony and shall incur a civil penalty of the amount earned from the creation of the prohibited system or the amount of damages caused by the system, whichever is greater. 5. This section shall not be construed as imposing liability on any member, officer, director or employee who had no explicit or implicit knowledge of the prohibited high-risk advanced artificial intelligence system provided however that where the secretary sends a demand to cease the development, operation, or access to such system all members, officers, and directors shall be rebuttably presumed to have knowledge of the prohibited high-risk advanced artificial intelligence system. 6. This section shall be construed as prohibiting the development of a prohibited high-risk advanced artificial intelligence system or making such a system accessible to persons in the state of New York. 7. Notwithstanding subdivision one of this section, a person may develop a prohibited high-risk advanced artificial intelligence system where authorized by the secretary, provided that such system is developed and used only by the state or with substantial, continuous oversight by the state and such system is authorized only after public hearing and comment in accordance with section five hundred nine of this article.
Other · Automated Decisionmaking
State Tech. Law § 521(1)-(2)
Plain Language
The Secretary may impose unique regulatory requirements on AI systems that pose state or national security risks, assessed case-by-case. The provision defines criteria for national security risk (critical infrastructure disruption, conflict escalation, democratic process undermining, classified information access, population harm, financial market impact, environmental damage, social fabric harm) but does not itself impose specific compliance obligations on operators; it grants rulemaking authority to the Secretary. Operators' obligations will depend on subsequent regulations.
Statutory Text
1. The secretary may, by regulation, designate unique requirements for systems which, in the secretary's discretion, pose a risk to state or national security. Such systems shall be assessed on a case-by-case basis and shall not be liberally construed as including any system that, where used improperly, inherently possesses the ability to harm persons or property. 2. A high-risk advanced artificial intelligence system shall be deemed to pose a risk to state or national security where the system's malfunctioning or misuse poses a high risk of: (a) disrupting critical infrastructure; (b) triggering or escalating existing conflicts; (c) undermining or impacting the democratic process; (d) causing unauthorized access to classified information as designated by a relevant governmental entity; (e) harming a significant portion of the population or a specific segment of the population; (f) negatively impacting financial markets or economic stability; (g) causing consequential or irreversible damage to the environment; or (h) causing significant harm to the social fabric.
Other · Automated Decisionmaking
State Tech. Law § 508(1)-(6)
Plain Language
This provision establishes the penalty and enforcement framework but creates no independent compliance obligation. Civil penalties are capped at the greater of amount gained or actual damages, recovered by the Department or the AG. The AG may seek equitable and injunctive relief. False material statements in applications or reports are a misdemeanor (up to $500 fine and/or 6 months). Misrepresenting licensure status is prohibited. Penalties are proportionate to the violation.
Statutory Text
1. Any person who violates, disobeys or disregards any term or provision of this article or of any lawful notice, order or regulation pursuant thereto for which a civil or criminal penalty is not otherwise expressly prescribed in this article by law, shall be liable to the people of the state for a civil penalty of not to exceed the amount gained from such violation, or the actual damages caused by such violation whichever is greater. In assessing the civil penalty under this subdivision, the department, as may be applicable shall take into consideration the nature of such violation and shall assess a penalty that is proportionate to the violation. 2. The penalty provided for in subdivision one of this section shall be recovered by an action or proceeding in a court of competent jurisdiction brought by the department, as may be applicable, or by the attorney general at the request of the department. 3. Such civil penalty may be released or compromised by the department, as may be applicable, before the matter has been referred to the attorney general, and where such matter has been referred to the attorney general, any such penalty may be released or compromised and any action or proceeding commenced to recover the same may be settled and discontinued by the attorney general with the consent of the department. 4. It shall be the duty of the attorney general upon the request of the department, as may be applicable, to bring an action or proceeding against any person who violates, disobeys or disregards any term or provision of this article or of any lawful notice, order or regulation pursuant thereto for any relief authorized under this article, including equitable and/or injunctive relief and the recovery of civil penalties; provided, however, that the department or the secretary shall furnish the attorney general with such material, evidentiary matter or proof as may be requested by the attorney general for the prosecution of such an action or proceeding. 5. 
Any person who knowingly makes any incorrect statement of a material fact in any application, report or statement filed pursuant to this article, or who knowingly omits to state any material fact necessary to give the director any information lawfully required by the secretary or refuses to permit any lawful investigation or examination, shall be guilty of a misdemeanor and, upon conviction, shall be fined not more than five hundred dollars or imprisoned for not more than six months or both, in the discretion of the court. 6. No person shall make, directly or indirectly, orally or in writing, or by any method, practice or device, a representation that such entity is licensed under the law except that a licensee under this chapter may make a representation that the licensee is licensed as a high-risk advanced artificial intelligence system under this article.
Other · Automated Decisionmaking
State Tech. Law § 515(1)-(5)
Plain Language
This provision defines the grounds and procedures for license revocation and suspension but creates no independent compliance obligation. Licenses may be revoked or suspended for: failure to comply with reporting requirements, violation of any article provision, allowing uncertified third-party integration, existence of disqualifying conditions, or failure to pay required fees. The Secretary may summarily suspend a license for up to 30 days without a hearing where there is substantial risk of public harm or uncontainment. Voluntary surrender does not extinguish liability for prior acts.
Statutory Text
1. A license granted pursuant to this section may not be renewed, and may be revoked or suspended by the secretary upon a finding that: (a) the licensee has not complied with reporting requirements; (b) the licensee has violated any provision of this article; (c) the licensee knowingly allowed a non-certified third-party system to integrate with the licensee's system; (d) any fact or condition exists which, if it had existed at the time of the original application for such license, clearly would have warranted the secretary's refusal to issue such license; or (e) the licensee has failed to pay any sum of money lawfully demanded by the secretary or to comply with any demand, ruling or requirement of the secretary. 2. Any licensee may surrender any license by delivering to the secretary written notice that the licensee thereby surrenders such license, but such surrender shall not affect such licensee's civil or criminal liability for acts committed prior to such surrender. 3. Every license issued hereunder shall remain in force and effect until the same shall have been surrendered, revoked or suspended in accordance with the provisions of this article, but the secretary shall have authority to reinstate suspended licenses or to issue new licenses to a licensee whose license or licenses shall have been revoked if no fact or condition then exists which clearly would have warranted the secretary's refusal to issue such license. 4. Whenever the secretary shall revoke or suspend a license issued pursuant to this article, the secretary shall forthwith execute a written order to that effect. 5. The secretary may, on good cause shown, or where there is a substantial risk of public harm or substantial risk of a system becoming uncontained, without notice and a hearing, suspend any license issued pursuant to this article for a period not exceeding thirty days, pending investigation.