Developers and deployers of high-risk AI systems must conduct documented safety evaluations before deployment, perform adversarial testing to identify misuse potential and failure modes, and maintain ongoing safety controls. Pre-deployment evaluation alone does not satisfy this obligation; post-deployment monitoring and periodic re-evaluation are continuing requirements.
In compliance with the rules adopted by the attorney general pursuant to section 44-1383.03, a chatbot provider shall: 1. On a monthly basis: (a) Evaluate its chatbot for potential risk of harm to users. (b) Make information about its chatbot publicly available on its website. 2. Mitigate any risk of harm to users.
(a) Annually perform and document a comprehensive risk assessment to identify any child safety risk posed by the design, configuration, and operation of the companion chatbot that assesses all of the following:
(1) The likelihood of a covered harm occurring to users.
(2) Differential risks across age groups and developmental stages.
(3) Known vulnerabilities of children.
(4) Empirical data from actual use.
(5) Relevant academic research and regulatory guidance.
(b) Take and document measures that reasonably mitigate any child safety risk identified in a risk assessment conducted pursuant to subdivision (a).
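For a deployer operationalizing this recordkeeping duty, the five assessed factors map naturally onto a structured assessment record. The following is a minimal sketch in Python, not statutory language; every class and field name here (ChildSafetyRiskAssessment, AgeGroupRisk, and so on) is a hypothetical illustration.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record structure for the annual assessment in
# subdivision (a); names and fields are illustrative only.

@dataclass
class AgeGroupRisk:
    age_range: str   # e.g. "13-15"
    risk_level: str  # e.g. "low", "moderate", "high"
    rationale: str

@dataclass
class ChildSafetyRiskAssessment:
    assessment_date: date
    chatbot_version: str
    covered_harm_likelihood: str            # (1) likelihood of a covered harm
    differential_risks: list[AgeGroupRisk]  # (2) risk by age group and developmental stage
    child_vulnerabilities: list[str]        # (3) known vulnerabilities of children
    empirical_use_findings: list[str]       # (4) empirical data from actual use
    research_and_guidance: list[str]        # (5) academic research and regulatory guidance
    mitigations: list[str] = field(default_factory=list)  # subdivision (b) measures
```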
Any health care provider that uses an artificial intelligence system to make, or be a substantial factor in making, a consequential decision shall:
(1) Monitor the usage of artificial intelligence systems to make, or be a substantial factor in making, consequential decisions;
(2) Conduct regular performance evaluations of the artificial intelligence systems, including the assessment of:
(A) Potential biases;
(B) Risks to the safety and rights of patients, including the confidentiality of personal data; and
(C) Mitigation strategies for any identified risks;
(3) Implement procedures to address any deficiencies identified through the monitoring or performance evaluations, including the suspension or recalibration of any artificial intelligence system;
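Paragraphs (2) and (3) together describe an evaluate-then-remediate loop. A minimal sketch of that loop follows; all attributes, thresholds, and hooks on the `system` object (`bias_metric`, `suspend`, and so on) are invented for illustration, not statutory terms.

```python
# Sketch of a periodic evaluation feeding into the paragraph (3)
# remediation procedure. Everything on `system` is a hypothetical
# placeholder.

def run_performance_evaluation(system) -> list[str]:
    deficiencies = []
    if system.bias_metric() > system.bias_threshold:            # (2)(A) potential biases
        deficiencies.append("bias")
    if system.safety_incident_rate() > system.safety_threshold: # (2)(B) patient safety and rights
        deficiencies.append("patient_safety")
    if system.privacy_findings():                               # (2)(B) confidentiality of personal data
        deficiencies.append("privacy")
    return deficiencies

def address_deficiencies(system, deficiencies: list[str]) -> None:
    if deficiencies:
        system.suspend()                             # (3) suspend pending review
        system.schedule_recalibration(deficiencies)  # (3) or recalibrate
```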
A deployer of a chatbot shall do all of the following: 1. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the chatbot may cause a user in a manner that prioritizes the safety and well-being of users over the deployer's interests.
A deployer of a public-facing chatbot shall do all of the following: a. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the public-facing chatbot may cause a user in a manner that takes commercially reasonable steps to protect the safety and well-being of users.
3. A deployer shall only make a therapeutic chatbot available for a minor's use or purchase if all of the following apply:
a. The therapeutic chatbot was recommended for the minor's use by an individual licensed under chapter 154B or 154D after performing an evaluation of the minor.
b. The therapeutic chatbot's developer has significant documentation of how the therapeutic chatbot was tested.
c. Peer-reviewed clinical trial data exists demonstrating the therapeutic chatbot would be a safe, effective tool for the minor's diagnosis, treatment, mitigation, or prevention of a mental health condition.
d. The therapeutic chatbot's deployer provided clear disclosures of the therapeutic chatbot's functions, limitations, and data privacy policies to the individual recommending the therapeutic chatbot under paragraph "a", and to the minor's parents, guardians, or custodians.
e. The therapeutic chatbot's deployer developed and implemented protocols for testing the therapeutic chatbot for risks to users, identifying possible risks the therapeutic chatbot poses to users, mitigating risks the therapeutic chatbot poses to users, and quickly rectifying harm the therapeutic chatbot may have caused a user.
the compliance of the system with the Artificial Intelligence Risk Management Framework released by the National Institute of Standards and Technology on January 26, 2023, or a successor framework;
(3) If, within a 6-month period, more than a specified percentage, as determined by the Commissioner, of a carrier's adverse decisions made using the same artificial intelligence, algorithm, or software tool result in a grievance, the carrier shall provide for a model review process of the artificial intelligence, algorithm, or software tool and submit the findings in the report required under paragraph (1) of this subsection.
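The paragraph (3) trigger is a straightforward rate computation over a trailing six-month window. A sketch under assumed data structures; the decision-record fields and the commissioner-set `threshold_pct` are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical check for the paragraph (3) trigger: a tool needs a
# model review when its grievance rate over the trailing six months
# exceeds the commissioner-specified percentage.

def needs_model_review(decisions, tool_id: str, threshold_pct: float,
                       as_of: date) -> bool:
    window_start = as_of - timedelta(days=182)  # approximately six months
    in_window = [d for d in decisions
                 if d.tool_id == tool_id and window_start <= d.decided_on <= as_of]
    if not in_window:
        return False
    grievances = sum(1 for d in in_window if d.resulted_in_grievance)
    return 100.0 * grievances / len(in_window) > threshold_pct
```

A carrier could run this check per tool whenever the paragraph (1) report is assembled, flagging any tool that crosses the threshold for model review.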
(d) A licensee shall do all of the following: (1) Demonstrate effectiveness through peer-reviewed, controlled trials with appropriate validation studies conducted on appropriate sample sizes and real-world performance data. (2) Demonstrate effectiveness in a comparative analysis against human expert performance. (3) Meet minimum domain benchmarks as established by the Department.
The Office of Information Technology shall:
(1) establish minimum requirements for an artificial intelligence safety test for artificial intelligence technology sold, developed, deployed, used, or offered for sale in this State that is conducted by an artificial intelligence company pursuant to subsection c. of this section, which requirements shall include but not be limited to:
(a) an analysis of potential cybersecurity threats and vulnerabilities;
(b) an analysis of an artificial intelligence technology's data sources and potential sources of bias, incorrect or inaccurate information, or violations of State or federal criminal, copyright, patent, or trade secret laws; and
(c) descriptions of possible remedies or defensive measures that can be taken by the artificial intelligence company to address all potential cybersecurity threats and vulnerabilities, potential sources of bias, incorrect or inaccurate information, or potential violations of State or federal criminal, copyright, patent, or trade secret laws identified during the conducting of the safety test.
The Office of Information Technology shall: (2) review each annual report required to be submitted by an artificial intelligence company pursuant to subsection c. of this section.
1. New York residents have the right to be protected from unsafe or ineffective automated systems. These systems must be developed in collaboration with diverse communities, stakeholders, and domain experts to identify and address any potential concerns, risks, or impacts. 2. Automated systems shall undergo pre-deployment testing, risk identification and mitigation, and shall also be subjected to ongoing monitoring that demonstrates they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.
3. If an automated system fails to meet the requirements of this section, it shall not be deployed or, if already in use, shall be removed. No automated system shall be designed with the intent or a reasonably foreseeable possibility of endangering the safety of any New York resident or New York communities. 4. Automated systems shall be designed to proactively protect New York residents from harm stemming from unintended, yet foreseeable, uses or impacts.
6. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, shall be performed and the results made public whenever possible.
§ 517. Source code and outcome review.
1. The secretary shall conduct periodic evaluations of the source code and outcomes associated with each high-risk advanced artificial intelligence system. These examinations shall determine whether the system is in compliance with this article. The timing and frequency of these reviews shall be determined at the secretary's discretion, taking into account the potential risk posed by the system, the complexity of the system, the frequency of updates and upgrades, the complexity of such updates and upgrades, and any previous issues of non-compliance.
2. Upon completion of the review, the secretary is empowered to make binding recommendations to the operator to ensure the system's functionality and outcomes are aligned with the principles in the advanced artificial intelligence ethical code of conduct pursuant to section five hundred twenty-nine of this article, restrictions on prohibited artificial intelligence systems pursuant to section five hundred thirty of this article, and limitations and procedures for source code modifications, updates, upgrades, and rewrites pursuant to section five hundred nineteen of this article.
3. Following receipt of the secretary's recommendations, the operator shall consult with the secretary to determine the feasibility of implementing the recommendations and the time frame in which such recommendations can be implemented to ensure full compliance with the secretary's recommendations. The operator shall provide a detailed plan outlining how the recommendations will be addressed, along with a timeline for their implementation. The detailed plan shall be binding on the operator; provided, however, that where an unexpected occurrence arises which causes changes to such plan, the operator shall be entitled to extend such timeline or alter such plans where such operator notifies the secretary in writing regarding the unexpected occurrence and, within such writing, sets forth amendments to the detailed plan and timeline. The secretary shall have thirty days to approve or reject such amendments. Where such amendments are rejected, the operator shall continue with their original plan and timeline.
4. The secretary shall monitor the operator's compliance with such recommendations and may impose fines and other penalties pursuant to the provisions of this article for non-compliance that the secretary shall deem just and proportionate to the violation.
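Subdivision 1 names the factors behind the secretary's review cadence without fixing a formula. Purely as an illustration of how such factors could be weighed, here is a hypothetical scoring sketch; the weights and the 30/90/365-day tiers are invented, not drawn from the bill.

```python
# Hypothetical weighing of the subdivision 1 factors into a review
# interval. Each factor is scored 0 (low) to 3 (high); all weights
# and tiers are invented for illustration.

def review_interval_days(system_risk: int, system_complexity: int,
                         update_frequency: int, update_complexity: int,
                         prior_noncompliance: int) -> int:
    score = (2 * system_risk + system_complexity + update_frequency
             + update_complexity + 3 * prior_noncompliance)
    if score >= 15:
        return 30   # roughly monthly review
    if score >= 8:
        return 90   # quarterly
    return 365      # annual
```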
§ 518. Willfully or negligently uncontaining high-risk source code.
1. No licensee or non-licensee who develops a high-risk advanced artificial intelligence system shall willfully or negligently uncontain their source code except where authorized by the secretary in writing.
2. Any member, officer, director or employee of an entity who willfully violates subdivision one of this section shall be guilty of a class E felony.
3. Any member, officer, director or employee of an entity who negligently violates subdivision one of this section shall be guilty of a class A misdemeanor.
4. Any member, officer, director or employee of an entity who willfully or negligently uncontains a high-risk advanced artificial intelligence system described in paragraph (f) of subdivision two of section five hundred one of this article, or a prohibited high-risk advanced artificial intelligence system as described in section five hundred thirty of this article, shall be guilty of a class C felony.
5. The provisions of this section shall not be construed as imposing liability on any member, officer, director or employee who had no explicit or implicit knowledge of the risk or circumstances that caused the uncontainment of the high-risk advanced artificial intelligence system.
§ 525. Internal controls; ceasing operation. Every licensee shall have in place internal controls that, within a reasonable time following initiation, can safely and indefinitely cease the operation of the system or a major part of the system.
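In engineering terms, § 525 asks for a shutdown control that, once initiated, stops new work, winds down safely, and holds the system off indefinitely. A minimal sketch follows; the control class and the hooks it calls on `system` are hypothetical.

```python
import threading

# Hypothetical § 525-style internal control: once initiated, refuse
# new work, wind down in-flight work safely, and hold the system
# ceased indefinitely until deliberately re-authorized.

class CeaseOperationControl:
    def __init__(self, system):
        self._system = system
        self._ceased = threading.Event()

    def initiate_cease(self, reason: str) -> None:
        self._ceased.set()                    # refuse new work immediately
        self._system.stop_accepting_requests()
        self._system.drain_in_flight_work()   # complete or abort safely
        self._system.record_shutdown(reason)  # auditable record of the cease

    def is_operating(self) -> bool:
        return not self._ceased.is_set()
```

Keeping the cease path independent of the system's normal control plane is one way to satisfy "indefinitely": the hold persists even if the system itself misbehaves.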
§ 1711. Professional oversight requirement.
1. Any developer of an artificial intelligence technology intended for use in a professional domain regulated under title eight of the education law shall demonstrate that at least one professional domain expert has been directly and substantially involved in, at a minimum:
(a) the technology design phase;
(b) the data selection and training process;
(c) validation and testing of system outputs; and
(d) ongoing risk assessment and post-deployment evaluation.
2. The provisions of subdivision one of this section shall apply to artificial intelligence technology used in areas such as, but not limited to:
(a) health care diagnostics, treatment recommendations, or patient monitoring;
(b) legal decision-making or document generation;
(c) financial advising or lending tools;
(d) educational curriculum or assessment tools;
(e) construction, architecture, or structural safety systems; and
(f) public safety, law enforcement, or surveillance technologies.
1. A developer or deployer shall do the following:
(a) take reasonable measures to prevent and mitigate any harm identified by a pre-deployment evaluation described in section one hundred three or an impact assessment described in section one hundred four of this article;
(b) take reasonable measures to ensure that an independent auditor has all necessary information to complete an accurate and effective pre-deployment evaluation described in section one hundred three or an impact assessment described in section one hundred four of this article;
(c) with respect to a covered algorithm, consult stakeholders, including any communities that will be impacted by the covered algorithm, regarding the development or deployment of the covered algorithm prior to deploying, licensing, or offering the covered algorithm;
(d) with respect to a covered algorithm, certify that, based on the results of a pre-deployment evaluation described in section one hundred three or an impact assessment described in section one hundred four of this article:
(i) use of the covered algorithm is not likely to result in harm or disparate impact in the equal enjoyment of goods, services, or other activities or opportunities;
(ii) the benefits from the use of the covered algorithm to individuals affected by the covered algorithm likely outweigh the harms from the use of the covered algorithm to such individuals; and
(iii) use of the covered algorithm is not likely to result in a deceptive act or practice;
(e) ensure that any covered algorithm of the developer or deployer functions at a level that would be considered reasonable performance by an individual with ordinary skill in the art, and in a manner that is consistent with its expected and publicly-advertised performance, purpose, or use;
(f) ensure any data used in the design, development, deployment, or use of the covered algorithm is relevant and appropriate to the deployment context and the publicly-advertised purpose or use; and
(g) ensure use of the covered algorithm as intended is not likely to result in a violation of this article.
(b) It shall be unlawful for a developer to knowingly offer or license a covered algorithm for any consequential action other than those evaluated in the pre-deployment evaluation described in section one hundred three of this article. (c) It shall be unlawful for a deployer to knowingly use a covered algorithm for any consequential action other than a use evaluated in the pre-deployment evaluation described in section one hundred three of this article, unless the deployer agrees to assume the responsibilities of a developer required by this article.
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol;
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: ... (e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
1. (a) Beginning on January first, two thousand twenty-seven, each developer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of a high-risk artificial intelligence decision system. In any enforcement action brought on or after such date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a developer used reasonable care as required pursuant to this subdivision if:
(i) the developer complied with the provisions of this section; and
(ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system.
(b) No later than January first, two thousand twenty-six, and at least annually thereafter, the attorney general shall:
(i) identify independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and
(ii) publish a list of such independent third parties on the attorney general's website.
1. (a) Beginning on January first, two thousand twenty-seven, each deployer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a deployer of a high-risk artificial intelligence decision system used reasonable care as required pursuant to this subdivision if:
(i) the deployer complied with the provisions of this section; and
(ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the deployer completed bias and governance audits for the high-risk artificial intelligence decision system.
(b) No later than January first, two thousand twenty-seven, and at least annually thereafter, the attorney general shall:
(i) identify the independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and
(ii) make a list of such independent third parties available on the attorney general's website.
(d) Record, as and when reasonably possible, and retain, for as long as the frontier model is deployed plus five years, information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and (e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
C. Deployers shall implement and maintain a Quality Assurance Program, as outlined in Section 4 of this act, to ensure the safe, effective, and compliant use of AI devices in patient care.
C. Deployers of an AI device shall conduct and document regular performance evaluations and risk assessments of the device. Such evaluations and assessments should be informed by invited feedback from qualified end-users and, when applicable, participation in national specialty society-administered AI assessment registries. Whenever AI device performance concerns are identified, deployers shall implement appropriate corrective actions to mitigate risk to patients.
D. Deployers shall have a diligent review and selection process for the deployed AI device.
F. Deployers shall continuously monitor the performance of all deployed AI devices, including assessing any impact on patient safety or the quality of patient care. G. In conducting performance monitoring described in subsection F of this section, deployers must participate in national specialty society-administered artificial intelligence assessment registries when feasible.
(5) The performance, use and outcomes of the artificial intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability. (7) The artificial intelligence-based algorithms must not create foreseeable, material risks of harm to the patient.
(7) The performance, use and outcomes of the artificial intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability. (9) The artificial intelligence-based algorithms must not create foreseeable, material risks of harm to the covered person.
(7) The performance, use and outcomes of the artificial intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability. (9) The artificial intelligence-based algorithms must not create foreseeable, material risks of harm to the enrollee.
(4) The procedures by which the supplier:
(i) Conducts testing, prior to making the chatbot publicly available and regularly thereafter, to ensure that the output of the chatbot poses no greater risk to a consumer than that posed to an individual communicating with a human.
(ii) Identifies reasonably foreseeable adverse outcomes to, and potentially harmful interactions with, consumers that could result from using the chatbot.
(iii) Provides a mechanism for a consumer to report any potentially harmful interactions from the use of the chatbot.
(iv) Implements protocols to assess and respond to risk of harm to consumers or other individuals.
(v) Details actions taken to prevent or mitigate any adverse outcomes or potentially harmful interactions.
(vi) Implements protocols to respond, as soon as practicable, to acute risks of physical harm.
(vii) Reasonably ensures regular, objective reviews of safety, accuracy and efficacy, which may include internal or external audits.
(viii) Provides consumers with instructions on the safe use of the chatbot.
(ix) Prioritizes consumer mental health and safety over engagement metrics or profit.
(x) Implements measures to prevent discriminatory treatment of consumers.
(xi) Ensures compliance with the security and privacy provisions of 45 CFR Pts. 160 (relating to general administrative requirements) and 164 (relating to security and privacy), as if the supplier were a covered entity.
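Items (iii), (iv), and (vi) together describe a report-intake-and-triage pipeline. A minimal sketch under assumed severity labels and handler callbacks, all hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical pipeline for items (iii), (iv), and (vi): consumers
# file reports, each report is assessed, and acute physical-harm
# risks are escalated ahead of routine review.

@dataclass
class HarmReport:
    consumer_id: str
    description: str
    received_at: datetime

def triage(report: HarmReport, assess_risk, escalate_acute, queue_review) -> None:
    severity = assess_risk(report)   # (iv) assess risk of harm
    if severity == "acute_physical":
        escalate_acute(report)       # (vi) respond as soon as practicable
    else:
        queue_review(report)         # feeds the (vii) regular safety reviews
```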
(7) The artificial-intelligence-based algorithms must not create foreseeable, material risks of harm to the patient.
(9) The artificial-intelligence-based algorithms must not create foreseeable, material risks of harm to the covered person.
(9) The artificial-intelligence-based algorithms must not create foreseeable, material risks of harm to the enrollee.
(C) In compliance with the rules adopted and the regulations promulgated by the Attorney General pursuant to Section 39-80-40, a chatbot provider shall: (1) on a monthly basis: (a) evaluate its chatbot for potential risk of harm to users; and (b) make information about its chatbot publicly available on its website; and (2) mitigate any risk of harm to users.
To be eligible for regulatory mitigation, a participant shall demonstrate to the office that:
(a) the participant has the technical expertise and capability to responsibly develop and test the proposed artificial intelligence technology;
(b) the participant has sufficient financial resources to meet obligations during testing;
(c) the artificial intelligence technology provides potential substantial consumer benefits that may outweigh identified risks from mitigated enforcement of regulations;
(d) the participant has an effective plan to monitor and minimize identified risks from testing; and
(e) the scale, scope, and duration of proposed testing are appropriately limited based on risk assessments.
(a) Each developer or deployer of any inherently dangerous artificial intelligence system that could be reasonably expected to impact consumers shall exercise reasonable care to avoid any reasonably foreseeable risk arising out of the development of, intentional and substantial modification to, or deployment of an artificial intelligence system that causes or is likely to cause:
(1) the commission of a crime or unlawful act;
(2) any unfair or deceptive treatment of, or unlawful impact on, an individual;
(3) any physical, financial, relational, or reputational injury to an individual;
(4) psychological injuries that would be highly offensive to a reasonable person;
(5) any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of a person, if the intrusion would be offensive to a reasonable person;
(6) any violation of the intellectual property rights of persons under applicable State and federal laws;
(7) discrimination on the basis of a person's or class of persons' actual or perceived race, color, ethnicity, sex, sexual orientation, gender identity, sex characteristics, religion, national origin, familial status, biometric information, or disability status;
(8) distortion of a person's behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm; or
(9) the exploitation of the vulnerabilities of a specific group of persons due to their age or physical or mental disability in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.
(b) Each developer of an inherently dangerous artificial intelligence system shall document and disclose to any actual or potential deployer of the artificial intelligence system any:
(1) reasonably foreseeable risk, including by unintended or unauthorized uses, that causes or is likely to cause any of the injuries as set forth in subsection (a) of this section; and
(2) risk mitigation processes that are reasonably foreseeable to mitigate any injury as set forth in subsection (a) of this section.
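For the subsection (b) disclosure, the nine subsection (a) harm categories form a natural controlled vocabulary for tagging risks and mitigations. One hypothetical encoding follows; the enum and helper are illustrative only.

```python
from enum import Enum

# Hypothetical controlled vocabulary mirroring the subsection (a)
# harm categories, for tagging entries in a subsection (b) disclosure.

class HarmCategory(Enum):
    CRIME_OR_UNLAWFUL_ACT = 1
    UNFAIR_OR_DECEPTIVE_TREATMENT = 2
    PHYSICAL_FINANCIAL_RELATIONAL_REPUTATIONAL_INJURY = 3
    OFFENSIVE_PSYCHOLOGICAL_INJURY = 4
    INTRUSION_ON_SECLUSION = 5
    INTELLECTUAL_PROPERTY_VIOLATION = 6
    DISCRIMINATION = 7
    BEHAVIORAL_DISTORTION = 8
    EXPLOITATION_OF_VULNERABLE_GROUPS = 9

def disclosure_entry(category: HarmCategory, risk: str, mitigation: str) -> dict:
    # One hypothetical line item in a developer's risk disclosure.
    return {"category": category.name, "risk": risk, "mitigation": mitigation}
```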
(a) No developer shall offer, sell, lease, give, or otherwise place in the stream of commerce:
(1) an inherently dangerous artificial intelligence system, unless the developer has conducted a documented testing, evaluation, verification, and validation of that system at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology (NIST); or
(2) an artificial intelligence system that creates reasonably foreseeable risks pursuant to section 4193f of this subchapter, unless the developer mitigates these risks to the extent possible, considers alternatives, and discloses vulnerabilities and mitigation tactics to a deployer.
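Item (1) keys the required testing, evaluation, verification, and validation (TEVV) to the NIST AI Risk Management Framework, whose 1.0 release is organized around four functions: Govern, Map, Measure, and Manage. A hypothetical documentation record along those lines; the field names are illustrative, not prescribed by the bill.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical TEVV documentation record organized around the four
# NIST AI RMF 1.0 functions. Nothing here is statutory language.

@dataclass
class TevvRecord:
    system_name: str
    framework_version: str  # e.g. "NIST AI RMF 1.0"
    completed_on: date
    govern_items: list[str] = field(default_factory=list)   # policies, accountability
    map_items: list[str] = field(default_factory=list)      # context and risk identification
    measure_items: list[str] = field(default_factory=list)  # tests, metrics, results
    manage_items: list[str] = field(default_factory=list)   # mitigations and monitoring
```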
(c) Risk assessment. A chatbot provider shall, on a monthly basis and according to metrics set forth in rules adopted by the Attorney General pursuant to this subchapter, assess its chatbot for risks of harm to users and actively mitigate any risks of harm.
(e) A private entity in possession of a biometric identifier or biometric information shall: (1) Store, transmit, and protect from disclosure all biometric identifiers and biometric information using the reasonable standard of care within the private entity's industry; and (2) Store, transmit, and protect from disclosure all biometric identifiers and biometric information in a manner that is the same as or more protective than the way the private entity stores, transmits, and protects other confidential and sensitive information.