Statutory Text
1. Nothing in this article shall be construed to restrict a developer's, deployer's, or other person's ability to:
(a) comply with federal, state or municipal law;
(b) comply with a civil, criminal or regulatory inquiry, investigation, subpoena, or summons by a federal, state, municipal, or other governmental authority;
(c) cooperate with a law enforcement agency concerning conduct or activity that the developer, deployer, or other person reasonably and in good faith believes may violate federal, state, or municipal law;
(d) investigate, establish, exercise, prepare for, or defend a legal claim;
(e) take immediate steps to protect an interest that is essential for the life or physical safety of a consumer or another individual;
(f) (i) by any means other than facial recognition technology, prevent, detect, protect against, or respond to: (A) a security incident; (B) a malicious or deceptive activity; or (C) identity theft, fraud, harassment or any other illegal activity; (ii) investigate, report, or prosecute the persons responsible for any action described in subparagraph (i) of this paragraph; or (iii) preserve the integrity or security of systems;
(g) engage in public or peer-reviewed scientific or statistical research in the public interest that: (i) adheres to all other applicable ethics and privacy laws; and (ii) is conducted in accordance with: (A) part forty-six of title forty-five of the code of federal regulations, as amended; or (B) relevant requirements established by the federal food and drug administration;
(h) conduct research, testing, and development activities regarding an artificial intelligence decision system or model, other than testing conducted pursuant to real world conditions, before such artificial intelligence decision system or model is placed on the market, deployed, or put into service, as applicable;
(i) effectuate a product recall;
(j) identify and repair technical errors that impair existing or intended functionality; or
(k) assist another developer, deployer, or person with any of the obligations imposed pursuant to this article.

2. The obligations imposed on developers, deployers, or other persons pursuant to this article shall not apply where compliance by the developer, deployer, or other person with the provisions of this article would violate an evidentiary privilege pursuant to state law.

3. Nothing in this article shall be construed to impose any obligation on a developer, deployer, or other person that adversely affects the rights or freedoms of any person, including, but not limited to, the rights of any person:
(a) to freedom of speech or freedom of the press guaranteed in: (i) the first amendment to the United States constitution; and (ii) section eight of the New York state constitution; or
(b) pursuant to section seventy-nine-h of the civil rights law.

4. Nothing in this article shall be construed to apply to any developer, deployer, or other person:
(a) insofar as such developer, deployer or other person develops, deploys, puts into service, or intentionally and substantially modifies, as applicable, a high-risk artificial intelligence decision system: (i) that has been approved, authorized, certified, cleared, developed, or granted by: (A) a federal agency, including, but not limited to, the federal food and drug administration or the federal aviation administration, acting within the scope of such federal agency's authority; or (B) a regulated entity subject to supervision and regulation by the federal housing finance agency; or (ii) in compliance with standards that are: (A) established by: (I) any federal agency, including, but not limited to, the federal office of the national coordinator for health information technology; or (II) a regulated entity subject to supervision and regulation by the federal housing finance agency; and (B) substantially equivalent to, and at least as stringent as, the standards established pursuant to this article;
(b) conducting research to support an application: (i) for approval or certification from any federal agency, including, but not limited to, the federal food and drug administration, the federal aviation administration, or the federal communications commission; or (ii) that is otherwise subject to review by any federal agency;
(c) performing work pursuant to, or in connection with, a contract with the federal department of commerce, the federal department of defense, or the national aeronautics and space administration, unless such developer, deployer, or other person is performing such work on a high-risk artificial intelligence decision system that is used to make, or as a substantial factor in making, a decision concerning employment or housing; or
(d) that is a covered entity, as defined by the health insurance portability and accountability act of 1996 and the regulations promulgated thereunder, as amended, and providing health care recommendations that: (i) are generated by an artificial intelligence decision system; (ii) require a health care provider to take action to implement such recommendations; and (iii) are not considered to be high risk.

5. Nothing in this article shall be construed to apply to any artificial intelligence decision system that is acquired by or for the federal government or any federal agency or department, including, but not limited to, the federal department of commerce, the federal department of defense, or the national aeronautics and space administration, unless such artificial intelligence decision system is a high-risk artificial intelligence decision system that is used to make, or as a substantial factor in making, a decision concerning employment or housing.

6. Any insurer, as defined by section five hundred one of the insurance law, or fraternal benefit society, as defined by section four thousand five hundred one of the insurance law, shall be deemed to be in full compliance with the provisions of this article if such insurer or fraternal benefit society has implemented and maintains a written artificial intelligence decision systems program in accordance with all requirements established by the superintendent of financial services.

7. (a) Any bank, out-of-state bank, New York credit union, federal credit union, or out-of-state credit union, or any affiliate or subsidiary thereof, shall be deemed to be in full compliance with the provisions of this article if such bank, out-of-state bank, New York credit union, federal credit union, out-of-state credit union, affiliate, or subsidiary is subject to examination by any state or federal prudential regulator pursuant to any published guidance or regulations that apply to the use of high-risk artificial intelligence decision systems, and such guidance or regulations: (i) impose requirements that are substantially equivalent to, and at least as stringent as, the requirements of this article; and (ii) at a minimum, require such bank, out-of-state bank, New York credit union, federal credit union, out-of-state credit union, affiliate, or subsidiary to: (A) regularly audit such bank's, out-of-state bank's, New York credit union's, federal credit union's, out-of-state credit union's, affiliate's, or subsidiary's use of high-risk artificial intelligence decision systems for compliance with state and federal anti-discrimination laws and regulations applicable to such bank, out-of-state bank, New York credit union, federal credit union, out-of-state credit union, affiliate, or subsidiary; and (B) mitigate any algorithmic discrimination caused by the use of a high-risk artificial intelligence decision system, or any risk of algorithmic discrimination that is reasonably foreseeable as a result of the use of a high-risk artificial intelligence decision system.

8. If a developer, deployer, or other person engages in any action under an exemption pursuant to subdivisions one, two, three, four, five, six, or seven of this section, the developer, deployer, or other person bears the burden of demonstrating that such action qualifies for such exemption.