Developers and deployers of high-risk AI systems must conduct documented safety evaluations before deployment, perform adversarial testing to identify misuse potential and failure modes, and maintain ongoing safety controls. Pre-deployment evaluation does not permanently satisfy this obligation; post-deployment monitoring and re-evaluation are continuing requirements.
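As an illustration of the adversarial-testing obligation, the sketch below shows one way a documented red-team pass could be structured; the model, prompt list, and judge callables are hypothetical placeholders, not anything the statutes below prescribe.

```python
# Minimal red-team harness (illustrative only; `model`, `judge`, and the
# prompt list are placeholders, not requirements of any statute quoted here).
def run_adversarial_suite(model, prompts, judge):
    """Probe a model with misuse-oriented prompts; return documented failures."""
    failures = []
    for prompt in prompts:
        reply = model(prompt)
        if not judge(prompt, reply):  # judge() returns False on unsafe output
            failures.append({"prompt": prompt, "reply": reply})
    return failures  # persist as part of the pre-deployment evaluation record
```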
(2) Certify annually to the department that: (i) use of artificial intelligence and the outcomes that it generates are reviewed on a periodic basis to maximize accuracy and reliability; and (ii) use of artificial intelligence in utilization review complies with the requirements of subsection (b).
In compliance with the rules adopted by the attorney general pursuant to section 44-1383.03, a chatbot provider shall: 1. On a monthly basis: (a) Evaluate its chatbot for potential risk of harm to users. (b) Make information about its chatbot publicly available on its website. 2. Mitigate any risk of harm to users.
On or before July 1, 2027, an operator shall do all of the following: (a) Annually perform and document a comprehensive risk assessment to identify any child safety risk posed by the design, configuration, and operation of the companion chatbot that assesses all of the following: (1) The likelihood of a covered harm occurring to users. (2) Differential risks across age groups and developmental stages. (3) Known vulnerabilities of children. (4) Empirical data from actual use. (5) Relevant academic research and regulatory guidance. (b) Take and document measures that reasonably mitigate any child safety risk identified in a risk assessment conducted pursuant to subdivision (a).
Any health care provider that uses an artificial intelligence system to make, or be a substantial factor in making, a consequential decision shall: (1) Monitor the usage of artificial intelligence systems to make, or be a substantial factor in making, consequential decisions; (2) Conduct regular performance evaluations of the artificial intelligence systems, including the assessment of: (A) Potential biases; (B) Risks to the safety and rights of patients, including the confidentiality of personal data; and (C) Mitigation strategies for any identified risks; (3) Implement procedures to address any deficiencies identified through the monitoring or performance evaluations, including the suspension or recalibration of any artificial intelligence system;
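The bias assessment in item (2)(A) could be operationalized in many ways; one minimal sketch computes a selection-rate gap across patient groups. The metric and any threshold are assumptions, not statutory requirements.

```python
import pandas as pd

def selection_rate_gap(df: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Largest difference in favorable-decision rates across groups.

    One simple disparity metric; the statute does not prescribe any metric.
    """
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())
```

A deployer might, for instance, invoke the recalibration procedure of item (3) whenever the gap exceeds an internally documented threshold.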
A deployer of a chatbot shall do all of the following: 1. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the chatbot may cause a user in a manner that prioritizes the safety and well-being of users over the deployer's interests.
A deployer of a public-facing chatbot shall do all of the following: a. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the public-facing chatbot may cause a user in a manner that takes commercially reasonable steps to protect the safety and well-being of users.
A deployer of a chatbot shall do all of the following: 1. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the chatbot may cause a user in a manner that prioritizes the safety and well-being of users over the deployer's interests.
(9) the performance, use, and outcomes of an artificial intelligence, algorithm, or other software tool are reviewed and revised, if necessary and at least on a quarterly basis, to maximize accuracy and reliability;
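A quarterly review of this kind is straightforward to automate. A minimal sketch, assuming a labeled holdout set and a documented baseline accuracy (both assumptions; the provision names neither):

```python
from sklearn.metrics import accuracy_score

def quarterly_review(model, X_holdout, y_holdout, baseline, tolerance=0.02):
    """Re-score the tool on labeled holdout data; flag it for revision if
    accuracy drops more than `tolerance` below the documented baseline."""
    current = accuracy_score(y_holdout, model.predict(X_holdout))
    return {"accuracy": current, "needs_revision": current < baseline - tolerance}
```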
(d) A licensee shall do all of the following: (1) Demonstrate effectiveness through peer-reviewed, controlled trials with appropriate validation studies done on appropriate sample sizes with real-world performance data. (2) Demonstrate effectiveness in a comparative analysis to human expert performance. (3) Meet minimum domain benchmarks as established by the Department.
A licensee shall do all of the following: (1) Implement industry-standard encryption for data in transit and at rest, maintain detailed access logs, and conduct regular security audits no less than once every six months.
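For the at-rest encryption and access-log requirements, a minimal sketch using the `cryptography` package; the key handling and log format are illustrative, and a production deployment would draw keys from a key-management service rather than generating them inline.

```python
import json
import time
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, fetch from a key-management service
box = Fernet(key)

def store_encrypted(record: bytes) -> bytes:
    """Encrypt a record before writing it at rest."""
    return box.encrypt(record)

def log_access(user: str, record_id: str, path: str = "access.log") -> None:
    """Append a structured entry to the detailed access log."""
    entry = {"ts": time.time(), "user": user, "record": record_id}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```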
A licensee shall do all of the following: (1) Demonstrate effectiveness through peer-reviewed, controlled trials with appropriate validation studies done on appropriate sample sizes with real-world performance data. (2) Demonstrate effectiveness in a comparative analysis to human expert performance. (3) Meet minimum domain benchmarks as established by the Department.
All covered platforms shall utilize transport encryption for all messages between a user and a chatbot.
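On the client side, transport encryption can be enforced with Python's standard `ssl` module. The endpoint below is a placeholder, and no statute here mandates a particular TLS version; the minimum-version pin is simply a common hardening choice.

```python
import socket
import ssl

# Placeholder endpoint; the provision names no host or port.
HOST, PORT = "chat.example.com", 443

ctx = ssl.create_default_context()            # verifies server certificates
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

with socket.create_connection((HOST, PORT)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        tls.sendall(b"user message ...")  # chatbot traffic rides inside TLS
```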
The Office of Information Technology shall: (1) establish minimum requirements for an artificial intelligence safety test for artificial intelligence technology sold, developed, deployed, used, or offered for sale in this State that is conducted by an artificial intelligence company pursuant to subsection c. of this section, which requirements shall include but not be limited to: (a) an analysis of potential cybersecurity threats and vulnerabilities; (b) an analysis of an artificial intelligence technology's data sources and potential sources of bias, incorrect or inaccurate information, or violations of State or federal criminal, copyright, patent, or trade secret laws; and (c) descriptions of possible remedies or defensive measures that can be taken by the artificial intelligence company to address all potential cybersecurity threats and vulnerabilities, potential sources of bias, incorrect or inaccurate information, or potential violations of State or federal criminal, copyright, patent, or trade secret laws identified during the conducting of the safety test.
An artificial intelligence company shall annually subject all artificial intelligence technology sold, developed, deployed, used, or offered for sale in this State to a safety test that adheres to the requirements established pursuant to subsection b. of this section and submit a report to the Office of Information Technology containing: (1) a list of all artificial intelligence technologies tested; (2) a description of each safety test conducted, including the safety test's adherence to the requirements established pursuant to subsection b. of this section; (3) a list of all third parties used to conduct safety tests, if any; and (4) the results of each safety test administered.
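The four required report elements map naturally onto a structured record. The field names below are illustrative assumptions, not a schema published by the Office of Information Technology.

```python
from dataclasses import dataclass

@dataclass
class SafetyTestReport:
    technologies_tested: list[str]     # item (1)
    test_descriptions: dict[str, str]  # item (2): test name -> description,
                                       # including adherence to the requirements
    third_party_testers: list[str]     # item (3); empty if none were used
    results: dict[str, str]            # item (4): test name -> outcome
```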
1. New York residents have the right to be protected from unsafe or ineffective automated systems. These systems must be developed in collaboration with diverse communities, stakeholders, and domain experts to identify and address any potential concerns, risks, or impacts. 2. Automated systems shall undergo pre-deployment testing, risk identification and mitigation, and shall also be subjected to ongoing monitoring that demonstrates they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.
3. If an automated system fails to meet the requirements of this section, it shall not be deployed or, if already in use, shall be removed. No automated system shall be designed with the intent or a reasonably foreseeable possibility of endangering the safety of any New York resident or New York communities. 4. Automated systems shall be designed to proactively protect New York residents from harm stemming from unintended, yet foreseeable, uses or impacts.
6. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, shall be performed and the results made public whenever possible.
5. New York residents are entitled to protection from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse.
1. The secretary shall conduct periodic evaluations of the source code and outcomes associated with each high-risk advanced artificial intelligence system. These examinations shall determine whether the system is in compliance with this article. The timing and frequency of these reviews shall be determined at the secretary's discretion, taking into account the potential risk posed by the system, the complexity of the system, the frequency of updates and upgrades, the complexity of such updates and upgrades, and any previous issues of non-compliance.

2. Upon completion of the review, the secretary is empowered to make binding recommendations to the operator to ensure the system's functionality and outcomes are aligned with the principles in the advanced artificial intelligence ethical code of conduct pursuant to section five hundred twenty-nine of this article, restrictions on prohibited artificial intelligence systems pursuant to section five hundred thirty of this article, and limitations and procedures for source code modifications, updates, upgrades, and rewrites pursuant to section five hundred nineteen of this article.

3. Following receipt of the secretary's recommendations, the operator shall consult with the secretary to determine the feasibility of implementing the recommendations and the time frame in which such recommendations can be implemented to ensure full compliance with the secretary's recommendations. The operator shall provide a detailed plan outlining how the recommendations will be addressed, along with a timeline for their implementation. The detailed plan shall be binding on the operator; provided however that where an unexpected occurrence arises which causes changes to such plan, the operator shall be entitled to extend such timeline or alter such plans where such operator notifies the secretary in writing regarding the unexpected occurrence and, within such writing, sets forth amendments to the detailed plan and timeline. The secretary shall have thirty days to approve or reject such amendments. Where such amendments are rejected, the operator shall continue with their original plan and timeline.

4. The secretary shall monitor the operator's compliance with such recommendations and may impose fines and other penalties pursuant to the provisions of this article for non-compliance that the secretary shall deem just and proportionate to the violation.
(a) take reasonable measures to prevent and mitigate any harm identified by a pre-deployment evaluation described in section one hundred three or an impact assessment described in section one hundred four of this article; (b) take reasonable measures to ensure that an independent auditor has all necessary information to complete an accurate and effective pre-deployment evaluation described in section one hundred three or an impact assessment described in section one hundred four of this article;
(b) It shall be unlawful for a developer to knowingly offer or license a covered algorithm for any consequential action other than those evaluated in the pre-deployment evaluation described in section one hundred three of this article. (c) It shall be unlawful for a deployer to knowingly use a covered algorithm for any consequential action other than a use evaluated in the pre-deployment evaluation described in section one hundred three of this article, unless the deployer agrees to assume the responsibilities of a developer required by this article.
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol;
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: ... (e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
C. Deployers shall implement and maintain a Quality Assurance Program, as outlined in Section 4 of this act, to ensure the safe, effective, and compliant use of AI devices in patient care.
C. Deployers of an AI device shall conduct and document regular performance evaluations and risk assessments of the device. Such evaluations and assessments should be informed by invited feedback from qualified end-users and, when applicable, participation in national specialty society-administered AI assessment registries. Whenever AI device performance concerns are identified, deployers shall implement appropriate corrective actions to mitigate risk to patients.
D. Deployers shall have a diligent review and selection process for the deployed AI device.
F. Deployers shall continuously monitor the performance of all deployed AI devices, including assessing any impact on patient safety or the quality of patient care. G. In conducting performance monitoring described in subsection F of this section, deployers must participate in national specialty society-administered artificial intelligence assessment registries when feasible.
(5) The performance, use and outcomes of the artificial intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
(7) The performance, use and outcomes of the artificial intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
(7) The performance, use and outcomes of the artificial intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
(4) The procedures by which the supplier:
(i) Conducts testing, prior to making the chatbot publicly available and regularly thereafter, to ensure that the output of the chatbot poses no greater risk to a consumer than that posed to an individual communicating with a human.
(ii) Identifies reasonably foreseeable adverse outcomes to, and potentially harmful interactions with, consumers that could result from using the chatbot.
(iii) Provides a mechanism for a consumer to report any potentially harmful interactions from the use of the chatbot.
(iv) Implements protocols to assess and respond to risk of harm to consumers or other individuals.
(v) Details actions taken to prevent or mitigate any adverse outcomes or potentially harmful interactions.
(vi) Implements protocols to respond, as soon as practicable, to acute risks of physical harm.
(vii) Reasonably ensures regular, objective reviews of safety, accuracy and efficacy, which may include internal or external audits.
(viii) Provides consumers with instructions on the safe use of the chatbot.
(ix) Prioritizes consumer mental health and safety over engagement metrics or profit.
(x) Implements measures to prevent discriminatory treatment of consumers.
(xi) Ensures compliance with the security and privacy provisions of 45 CFR Pts. 160 (relating to general administrative requirements) and 164 (relating to security and privacy), as if the supplier were a covered entity.
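Items (iii) through (vii) together describe an intake-and-triage pipeline for harm reports. A minimal sketch, whose severity levels and routing labels are assumptions rather than regulatory terms:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    ROUTINE = 1
    ELEVATED = 2
    ACUTE = 3  # acute risk of physical harm; see item (vi)

@dataclass
class HarmReport:  # item (iii): a consumer-submitted report
    user_id: str
    excerpt: str
    severity: Severity

def triage(report: HarmReport) -> str:
    """Route a report per the supplier's response protocols (items (iv)-(vi))."""
    if report.severity is Severity.ACUTE:
        return "escalate_to_on_call"      # respond as soon as practicable
    if report.severity is Severity.ELEVATED:
        return "queue_for_safety_review"
    return "log_for_periodic_audit"       # feeds the regular reviews, item (vii)
```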
(5) The performance, use and outcomes of the artificial-intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
(7) The performance, use and outcomes of the artificial-intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
(7) The performance, use and outcomes of the artificial-intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
(7) The artificial-intelligence-based algorithms must not create foreseeable, material risks of harm to the patient.
(9) The artificial-intelligence-based algorithms must not create foreseeable, material risks of harm to the covered person.
(9) The artificial-intelligence-based algorithms must not create foreseeable, material risks of harm to the enrollee.
(C) In compliance with the rules adopted and the regulations promulgated by the Attorney General pursuant to Section 39-80-40, a chatbot provider shall: (1) on a monthly basis: (a) evaluate its chatbot for potential risk of harm to users; and (b) make information about its chatbot publicly available on its website; and (2) mitigate any risk of harm to users.
(C) In compliance with the rules adopted and the regulations promulgated by the Attorney General pursuant to Section 39-80-40, a chatbot provider shall: (1) on a monthly basis: (a) evaluate its chatbot for potential risk of harm to users; and (b) make information about its chatbot publicly available on its website; and (2) mitigate any risk of harm to users.
To be eligible for regulatory mitigation, a participant shall demonstrate to the office that: (a) the participant has the technical expertise and capability to responsibly develop and test the proposed artificial intelligence technology; (b) the participant has sufficient financial resources to meet obligations during testing; (c) the artificial intelligence technology provides potential substantial consumer benefits that may outweigh identified risks from mitigated enforcement of regulations; (d) the participant has an effective plan to monitor and minimize identified risks from testing; and (e) the scale, scope, and duration of proposed testing is appropriately limited based on risk assessments.
(a) No developer shall offer, sell, lease, give, or otherwise place in the stream of commerce: (1) an inherently dangerous artificial intelligence system, unless the developer has conducted a documented testing, evaluation, verification, and validation of that system at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology (NIST); or (2) an artificial intelligence system that creates reasonably foreseeable risks pursuant to section 4193f of this subchapter, unless the developer mitigates these risks to the extent possible, considers alternatives, and discloses vulnerabilities and mitigation tactics to a deployer.
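Documentation "at least as stringent as" the NIST AI RMF would at minimum tie each testing, evaluation, verification, and validation (TEVV) activity to the framework's four functions (GOVERN, MAP, MEASURE, MANAGE). A hypothetical record layout; the statute prescribes no particular format:

```python
import datetime
from dataclasses import dataclass

@dataclass
class TEVVRecord:
    system: str
    rmf_function: str  # "GOVERN", "MAP", "MEASURE", or "MANAGE" (NIST AI RMF 1.0)
    activity: str      # e.g., "red-team exercise", "holdout benchmark"
    result: str
    performed_on: datetime.date
```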
(a) Each developer or deployer of any inherently dangerous artificial intelligence system that could be reasonably expected to impact consumers shall exercise reasonable care to avoid any reasonably foreseeable risk arising out of the development of, intentional and substantial modification to, or deployment of an artificial intelligence system that causes or is likely to cause:
(1) the commission of a crime or unlawful act;
(2) any unfair or deceptive treatment of or unlawful impact on an individual;
(3) any physical, financial, relational, or reputational injury on an individual;
(4) psychological injuries that would be highly offensive to a reasonable person;
(5) any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns of a person, if the intrusion would be offensive to a reasonable person;
(6) any violation to the intellectual property rights of persons under applicable State and federal laws;
(7) discrimination on the basis of a person's or class of persons' actual or perceived race, color, ethnicity, sex, sexual orientation, gender identity, sex characteristics, religion, national origin, familial status, biometric information, or disability status;
(8) distortion of a person's behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm; or
(9) the exploitation of the vulnerabilities of a specific group of persons due to their age or physical or mental disability in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.
(c) Risk assessment. A chatbot provider shall on a monthly basis, according to metrics as set forth in rules adopted by the Attorney General pursuant to this subchapter, assess its chatbot for risks of harm to users and actively mitigate any risks of harm.