Organizations developing or deploying AI must establish a formal AI governance program, maintain contemporaneous records of AI system design, testing, and deployment decisions, and designate a responsible individual or office for AI governance. Establishing the program is not a one-time exercise; ongoing maintenance, recordkeeping, and accountability designation are continuing obligations.
A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of personal data and chat logs that are maintained by the chatbot provider. The program shall be written and made publicly available on the chatbot provider's website.
(a) An operator shall document all of the following with respect to any companion chatbot that the operator makes available in this state: (1) The existence of a graduated response system. (2) All credible crisis expressions detected by the companion chatbot. (3) The duration and conditions of a crisis interruption pause initiated by the companion chatbot.
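The three documentation duties in subsection (a) can be sketched as a minimal record schema. This is an illustrative sketch only: the class and field names (`CrisisEvent`, `OperatorDocumentation`, `pause_conditions`, and so on) are assumptions, not terms drawn from the statute.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class CrisisEvent:
    """Illustrative record for one credible crisis expression (item (2))."""
    chatbot_id: str
    detected_at: datetime
    expression_summary: str                        # what was classified as a credible crisis expression
    pause_started_at: Optional[datetime] = None    # item (3): crisis interruption pause, if initiated
    pause_duration: Optional[timedelta] = None
    pause_conditions: str = ""                     # conditions under which the pause was initiated

@dataclass
class OperatorDocumentation:
    """Aggregates the three documentation items required by subsection (a)."""
    graduated_response_system: str                 # item (1): description of the graduated response system
    crisis_events: list[CrisisEvent] = field(default_factory=list)

    def log_crisis(self, event: CrisisEvent) -> None:
        # Items (2) and (3): append each detected crisis expression,
        # together with any pause details, to the operator's record.
        self.crisis_events.append(event)
```

A single append-only log like this keeps the pause details attached to the crisis expression that triggered them, which is one plausible way to make all three items reviewable together.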
(c) Preservation of Records. Any personnel or other employment records created or received by any employer or other covered entity dealing with any employment practice and affecting any employment benefit of any applicant or employee shall be preserved by the employer or other covered entity for a period of four years from the date of the making of the record or the date of the personnel action involved, whichever occurs later. Such records include all applications, personnel records, membership records, employment referral records, selection criteria, and automated-decision system data.
(a) Beginning on the date that is 180 days after the Attorney General adopts regulations pursuant to Section 22615, and annually thereafter, an operator shall submit to an independent audit assessing the operator's compliance with this chapter. (b) Within 90 days of completing an independent audit pursuant to subdivision (a), the auditor shall submit an AI child safety audit report to the Attorney General for any audited companion chatbot. (c) (1) Notwithstanding any other law, except as provided in paragraph (2), an AI child safety audit report submitted pursuant to this section is confidential. (2) The Attorney General may disclose specific information from an AI child safety audit report to any of the following: (A) A government agency or a public prosecutor in the state as necessary for enforcement purposes. (B) A qualified researcher conducting a study on child safety, subject to confidentiality agreements and data protection requirements set by the Attorney General. (C) An independent child safety organization or advocacy group for the purpose of developing safety standards or educational resources, subject to appropriate confidentiality protections.
(a) A developer or a deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system. (b) The governance program required by this subdivision shall be appropriately designed with respect to all of the following: (1) The use, or intended use, of the high-risk automated decision system. (2) The size, complexity, and resources of the deployer or developer. (3) The nature, context, and scope of the activities of the deployer or developer in connection with the high-risk automated decision system. (4) The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.
(a) A covered deployer conducting business in this state shall have a duty to protect personal information held by the covered deployer as provided by this section. (b) A covered deployer whose high-risk artificial intelligence systems process personal information shall develop, implement, and maintain a comprehensive information security program that is written in one or more readily accessible parts and contains administrative, technical, and physical safeguards that are appropriate for all of the following: (1) The covered deployer's size, scope, and type of business. (2) The amount of resources available to the covered deployer. (3) The amount of data stored by the covered deployer. (4) The need for security and confidentiality of personal information stored by the covered deployer.
(c) The comprehensive information security program required by subdivision (b) shall meet all of the following requirements: (1) The program shall incorporate safeguards that are consistent with the safeguards for the protection of personal information and information of a similar character under state or federal laws and regulations applicable to the covered deployer. (2) The program shall include the designation of one or more employees of the covered deployer to maintain the program.
(3) The program shall require the identification and assessment of reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of any electronic, paper, or other record containing personal information, and the establishment of a process for evaluating and improving, as necessary, the effectiveness of the current safeguards for limiting those risks, including by all of the following: (A) Requiring ongoing employee and contractor education and training, including education and training for temporary employees and contractors of the covered deployer, on the proper use of security procedures and protocols and the importance of personal information security. (B) Mandating employee compliance with policies and procedures established under the program. (C) Providing a means for detecting and preventing security system failures. (4) The program shall include security policies for the covered deployer's employees relating to the storage, access, and transportation of records containing personal information outside of the covered deployer's physical business premises. (5) The program shall provide disciplinary measures for violations of a policy or procedure established under the program. (6) The program shall include measures for preventing a terminated employee from accessing records containing personal information.
(7) The program shall provide policies for the supervision of third-party service providers that include both of the following: (A) Taking reasonable steps to select and retain third-party service providers that are capable of maintaining appropriate security measures to protect personal information consistent with applicable law. (B) Requiring third-party service providers by contract to implement and maintain appropriate security measures for personal information.
(8) The program shall provide reasonable restrictions on physical access to records containing personal information, including by requiring the records containing the data to be stored in a locked facility, storage area, or container. (9) The program shall include regular monitoring to ensure that the program is operating in a manner reasonably calculated to prevent unauthorized access to or unauthorized use of personal information and, as necessary, upgrading information safeguards to limit the risk of unauthorized access to or unauthorized use of personal information. (10) The program shall require the regular review of the scope of the program's security measures, which must occur subject to both of the following timeframes: (A) At least annually. (B) Whenever there is a material change in the covered deployer's business practices that may reasonably affect the security or integrity of records containing personal information.
(11) The program shall require the documentation of responsive actions taken in connection with any incident involving a breach of security, including a mandatory postincident review of each event and the actions taken, if any, in response to that event to make changes in business practices relating to protection of personal information.
(12) The program shall, to the extent feasible, include all of the following procedures and protocols with respect to computer system security requirements, or procedures and protocols providing a higher degree of security, for the protection of personal information: (A) The use of secure user authentication protocols that include all of the following features: (i) The control of user login credentials and other identifiers. (ii) The use of a reasonably secure method of assigning and selecting passwords or using unique identifier technologies, which may include biometrics or token devices. (iii) The control of data security passwords to ensure that the passwords are kept in a location and a format that do not compromise the security of the data the passwords protect. (iv) The restriction of access to only active users and active user accounts. (v) The blocking of access to user credentials or identification after multiple unsuccessful attempts to gain access. (B) The use of secure access control measures that include both of the following: (i) The restriction of access to records and files containing personal information to only employees or contractors who need access to that personal information to perform the job duties of the employees or contractors. (ii) The assignment of a unique identification and a password to each employee or contractor with access to a computer containing personal information, which may not be a vendor-supplied default password, or the use of another protocol reasonably designed to maintain the integrity of the security of the access controls to personal information. (C) The encryption of both of the following: (i) Transmitted records and files containing personal information that will travel across public networks. (ii) Data containing personal information that is transmitted wirelessly. (D) The use of reasonable monitoring of systems for unauthorized use of or access to personal information.
(E) The encryption of all personal information stored on laptop computers or other portable devices. (F) For files containing personal information on a system that is connected to the internet, the use of reasonably current firewall protection and operating system security patches that are reasonably designed to maintain the integrity of the personal information. (G) The use of both of the following: (i) A reasonably current version of system security agent software that includes malware protection and reasonably current patches and virus definitions. (ii) A version of system security agent software that is supportable with current patches and virus definitions and is set to receive the most current security updates on a regular basis.
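Items (12)(A)(iv) and (v) describe concrete, testable behavior: only active accounts may authenticate, and access is blocked after multiple unsuccessful attempts. A minimal sketch of that logic follows; the class name and the lockout threshold are assumptions, since the statute says only "multiple unsuccessful attempts" without fixing a number.

```python
class LoginGuard:
    """Sketch of (12)(A)(iv)-(v): restrict authentication to active accounts
    and lock an account after repeated failed attempts. The threshold is
    illustrative, not statutory."""

    def __init__(self, max_failures: int = 5):
        self.max_failures = max_failures
        self.failures: dict[str, int] = {}
        self.active: set[str] = set()
        self.locked: set[str] = set()

    def activate(self, user: str) -> None:
        self.active.add(user)

    def attempt(self, user: str, ok: bool) -> bool:
        # (iv): only active, unlocked accounts may authenticate at all.
        if user not in self.active or user in self.locked:
            return False
        if ok:
            self.failures[user] = 0
            return True
        self.failures[user] = self.failures.get(user, 0) + 1
        # (v): block access after multiple unsuccessful attempts.
        if self.failures[user] >= self.max_failures:
            self.locked.add(user)
        return False
```

Note that a locked account stays locked even on a subsequent correct credential, which is the conservative reading of "blocking of access ... after multiple unsuccessful attempts"; a production system would also need an unlock procedure, which the statute leaves open.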
A large frontier developer shall write, implement, comply with, and clearly and conspicuously publish on its internet website a frontier AI framework that applies to the large frontier developer's frontier models and describes how the large frontier developer approaches all of the following: (1) Incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework. (2) Defining and assessing thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds. (3) Applying mitigations to address the potential for catastrophic risks based on the results of assessments undertaken pursuant to paragraph (2). (4) Reviewing assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally. (5) Using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks. (6) Revisiting and updating the frontier AI framework, including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures pursuant to subdivision (c). (7) Cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties. (8) Identifying and responding to critical safety incidents. (9) Instituting internal governance practices to ensure implementation of these processes. (10) Assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
A large frontier developer shall review and, as appropriate, update its frontier AI framework at least once per year.
If a large frontier developer makes a material modification to its frontier AI framework, the large frontier developer shall clearly and conspicuously publish the modified frontier AI framework and a justification for that modification within 30 days.
(B) A large frontier developer shall not make a materially false or misleading statement about its implementation of, or compliance with, its frontier AI framework... (2) This subdivision does not apply to a statement that was made in good faith and was reasonable under the circumstances.
A frontier developer shall not make, adopt, enforce, or enter into a rule, regulation, policy, or contract that prevents a covered employee from disclosing, or retaliates against a covered employee for disclosing, information to the Attorney General, a federal authority, a person with authority over the covered employee, or another covered employee who has authority to investigate, discover, or correct the reported issue, if the covered employee has reasonable cause to believe that the information discloses either of the following: (1) The frontier developer's activities pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk. (2) The frontier developer has violated Chapter 25.1 (commencing with Section 22757.10) of Division 8 of the Business and Professions Code.
A frontier developer shall not enter into a contract that prevents a covered employee from making a disclosure protected under Section 1102.5.
A frontier developer shall provide a clear notice to all covered employees of their rights and responsibilities under this section, including by doing either of the following: (1) At all times posting and displaying within any workplace maintained by the frontier developer a notice to all covered employees of their rights under this section, ensuring that any new covered employee receives equivalent notice, and ensuring that any covered employee who works remotely periodically receives an equivalent notice. (2) At least once each year, providing written notice to each covered employee of the covered employee's rights under this section and ensuring that the notice is received and acknowledged by all of those covered employees.
A large frontier developer shall provide a reasonable internal process through which a covered employee may anonymously disclose information to the large frontier developer if the covered employee believes in good faith that the information indicates that the large frontier developer's activities present a specific and substantial danger to the public health or safety resulting from a catastrophic risk or that the large frontier developer violated Chapter 25.1 (commencing with Section 22757.10) of Division 8 of the Business and Professions Code. The process shall include a monthly update to the person who made the disclosure regarding the status of the large frontier developer's investigation of the disclosure and the actions taken by the large frontier developer in response to the disclosure. (2)(A) Except as provided in subparagraph (B), the disclosures and responses of the process required by this subdivision shall be shared with officers and directors of the large frontier developer at least once each quarter. (B) If a covered employee has alleged wrongdoing by an officer or director of the large frontier developer in a disclosure or response, subparagraph (A) shall not apply with respect to that officer or director.
(b) An employer shall maintain an updated list of all ADS currently in use.
(e) THE ARTIFICIAL INTELLIGENCE SYSTEM PRODUCES AND RETAINS DOCUMENTATION, AUDIT LOGS, AND MODEL-GOVERNANCE RECORDS IN ORDER TO DEMONSTRATE COMPLIANCE WITH THIS SECTION AND SECTION 10-3-1104.9;
(2) (a) A DEVELOPER SHALL PROVIDE TO EACH DEPLOYER OF A COVERED ADMT DEVELOPED BY THE DEVELOPER A NOTICE OF MATERIAL UPDATES, INTENTIONAL AND SUBSTANTIAL MODIFICATIONS, AND CHANGES TO THE INTENDED USE OF, LIMITATIONS FOR, OR RISK MITIGATION FOR THE COVERED ADMT WITHIN A REASONABLE TIME. (b) A DEVELOPER MAY USE PUBLIC RELEASE NOTES CONTAINING THE INFORMATION REQUIRED BY SUBSECTION (2)(a) OF THIS SECTION TO COMPLY WITH THIS SUBSECTION (2) IF THE DEVELOPER PROVIDES DIRECT NOTICE OF THE PUBLIC RELEASE TO EACH DEPLOYER OF THE COVERED ADMT.
A DEVELOPER SHALL RETAIN, FOR NOT LESS THAN THREE YEARS AFTER THE CREATION OF A RECORD REQUIRED OR CREATED UNDER THIS SECTION OR FOR A LONGER PERIOD IF REQUIRED BY APPLICABLE STATE OR FEDERAL LAW, RECORDS REASONABLY NECESSARY TO DEMONSTRATE COMPLIANCE WITH THIS SECTION. RECORDS INCLUDE SYSTEM VERSION IDENTIFIERS, CHANGELOGS, AND DOCUMENTATION AND NOTICES OF MATERIAL UPDATES PROVIDED TO DEPLOYERS PURSUANT TO SUBSECTION (2) OF THIS SECTION.
A DEPLOYER SHALL RETAIN, FOR NOT LESS THAN THREE YEARS AFTER THE DATE OF A CONSEQUENTIAL DECISION OR FOR A LONGER PERIOD IF REQUIRED BY APPLICABLE STATE OR FEDERAL LAW, RECORDS REASONABLY NECESSARY TO DEMONSTRATE COMPLIANCE WITH THIS PART 17. RECORDS MAY INCLUDE, AS APPLICABLE, COVERED ADMT VERSION IDENTIFIERS, CHANGELOGS, AND DOCUMENTATION OF MATERIAL MITIGATION CHANGES.
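The developer and deployer retention clauses share the same shape: a three-year minimum running from an anchor date (record creation for developers, the consequential decision for deployers), extendable when other law requires a longer period. A minimal sketch, with illustrative names throughout:

```python
from dataclasses import dataclass
from datetime import date

RETENTION_YEARS = 3  # statutory minimum; applicable state or federal law may require longer

@dataclass
class ComplianceRecord:
    """Illustrative envelope for the record types named in the clauses:
    version identifiers, changelogs, and notices of material updates."""
    description: str
    anchor_date: date  # record creation (developer) or consequential decision (deployer)

def may_discard(rec: ComplianceRecord, today: date, extra_years: int = 0) -> bool:
    """True only once the three-year minimum, plus any longer period required
    by other applicable law (passed as extra_years), has fully elapsed."""
    keep_through = rec.anchor_date.replace(
        year=rec.anchor_date.year + RETENTION_YEARS + extra_years
    )
    return today > keep_through
```

Treating the longer-period requirement as an additive parameter is one simple design; an alternative is to compute each applicable retention period separately and keep the record until the latest of them expires.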
(3) A DEVELOPER IS SUBJECT TO THE DISCLOSURE REQUIREMENTS DESCRIBED IN SUBSECTIONS (1) AND (2) OF THIS SECTION ONLY FOR A DEPLOYER'S USE OF A COVERED ADMT WHERE THE ADMT WAS MARKETED, ADVERTISED, CONFIGURED, CONTRACTED, SOLD, OR LICENSED TO BE USED TO MATERIALLY INFLUENCE A CONSEQUENTIAL DECISION. (5) THIS SECTION APPLIES WHEN A DEVELOPER CREATES A COVERED ADMT THAT IS INTENDED, DOCUMENTED, MARKETED, ADVERTISED, CONFIGURED, OR CONTRACTED TO BE USED TO MAKE CONSEQUENTIAL DECISIONS OR WHEN THE DEVELOPER BECOMES AWARE THAT THE COVERED ADMT IS BEING USED TO MAKE CONSEQUENTIAL DECISIONS IN A MANNER CONSISTENT WITH THE INTENDED AND CONTRACTED USES.
(2) (a) On and after June 30, 2026, and except as provided in subsection (6) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system. A risk management policy and program implemented and maintained pursuant to this subsection (2) must be reasonable considering:
(e) Each deployer shall maintain records relating to bias audits required pursuant to subsection (a) of this section for a period of not less than five years and shall make such records available to the Labor Commissioner upon request.
Except as provided in Code Section 10-16-6, a deployer of an automated decision system shall implement a risk management policy and program to govern the deployer's deployment of the automated decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of an automated decision system. A risk management policy and program implemented and maintained pursuant to this subsection shall take into consideration: (1) Either: (A) The guidance and standards set forth in the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology of the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (B) Any risk management framework for artificial intelligence systems that the Attorney General, in the Attorney General's discretion, may designate; (2) The size and complexity of the deployer; (3) The nature and scope of the automated decision systems deployed by the deployer, including the intended uses of the automated decision systems; and (4) The sensitivity and volume of data processed in connection with the automated decision systems deployed by the deployer. A risk management policy and program implemented pursuant to this Code section may cover multiple automated decision systems deployed by the deployer.
Each deployer shall establish and adhere to: (1) Written standards, policies, procedures, and protocols for the acquisition, use of, or reliance on automated decision systems developed by third-party developers, including reasonable contractual controls ensuring that the developer statements and summaries described in subsection (b) of Code Section 10-16-2 include all information necessary for the deployer to fulfill its obligations under this Code section; (2) Procedures for reporting any incorrect information or evidence of algorithmic discrimination to a developer for further investigation and mitigation, as necessary; and (3) Procedures to remediate and eliminate incorrect information from its automated decision systems that the deployer has identified or that has been reported to a developer.
(4) Maintain: (A) An updated inventory of the artificial intelligence systems; (B) Documentation on the system design, intended use, and training data of the artificial intelligence systems; (C) Record of the monitoring, performance evaluations, and oversight activities; and (D) Documentation of findings and actions taken to address any deficiencies identified through the monitoring or performance evaluations.
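The four maintenance items above map naturally onto a small inventory structure: one registry (item (A)) whose entries carry design documentation (item (B)), oversight records (item (C)), and remediation history (item (D)). The sketch below is illustrative; all class and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SystemEntry:
    """One inventory row covering items (B)-(D); field names are illustrative."""
    name: str
    design_doc: str = ""             # (B) system design
    intended_use: str = ""           # (B) intended use
    training_data_summary: str = ""  # (B) training data
    oversight_log: list[str] = field(default_factory=list)    # (C) monitoring, evaluations, oversight
    remediation_log: list[str] = field(default_factory=list)  # (D) findings and actions taken

class AIInventory:
    """(A) An updated inventory of the artificial intelligence systems."""

    def __init__(self) -> None:
        self.entries: dict[str, SystemEntry] = {}

    def register(self, entry: SystemEntry) -> None:
        self.entries[entry.name] = entry

    def record_finding(self, name: str, finding: str, action: str) -> None:
        # (D): pair each deficiency with the action taken to address it.
        self.entries[name].remediation_log.append(f"{finding} -> {action}")
```

Keeping findings and their corrective actions in a single paired entry, rather than in separate logs, makes it straightforward to demonstrate that each identified deficiency was actually addressed.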
5. An employer shall maintain an updated list of all automated decision systems currently in use by the employer to facilitate implementation of this section.
(b) A health insurance issuer shall ensure that its health insurance coverage is administered in conformity with this Act. The health insurance issuer's AI systems program shall include policies and procedures to ensure such conformity by all employees, directors, trustees, agents, representatives, and persons directly or indirectly contracted to administer the health insurance coverage. The health insurance issuer shall be responsible for any noncompliance under this Act with respect to its health insurance coverage. Nothing in this Section relieves any other person from liability for failure to comply with the Department's investigations or market conduct actions related to a health insurance issuer's compliance with this Act.
To address the concerns detailed in the findings in Section 5 of this Act and to ensure that negative impacts of AI system use are prevented, the Department of Innovation and Technology shall adopt rules as may be necessary to ensure that businesses using AI systems comply with the following five principles of AI governance: (1) Safety: Ensuring systems operate without causing harm to individuals. (2) Transparency: Providing clear and understandable explanations of how systems work and make decisions. (3) Accountability: Identifying and holding individuals or companies responsible for the system's performance and outcomes. (4) Fairness: Preventing and mitigating bias to ensure equitable treatment for all individuals. (5) Contestability: Allowing individuals to challenge and seek redress for decisions made by the system.
"AI systems program" means a written program for the responsible use of AI systems that makes or supports decisions related to regulated insurance practices to be developed, implemented, and maintained by all insurers authorized to do business in the State.
(a) A deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination associated with the use or intended use of an automated decision tool. The safeguards required by this subsection shall be appropriate to all of the following: (1) the use or intended use of the automated decision tool; (2) the deployer's role as a deployer; (3) the size, complexity, and resources of the deployer; (4) the nature, context, and scope of the activities of the deployer in connection with the automated decision tool; and (5) the technical feasibility and cost of available tools, assessments, and other means used by a deployer to map, measure, manage, and govern the risks associated with an automated decision tool. (b) The governance program required by this Section shall be designed to do all of the following: (1) identify and implement safeguards to address reasonably foreseeable risks of algorithmic discrimination resulting from the use or intended use of an automated decision tool; (2) if established by a deployer, provide for the performance of impact assessments as required by Section 10; (3) conduct an annual and comprehensive review of policies, practices, and procedures to ensure compliance with this Act; (4) maintain for 2 years after completion the results of an impact assessment; and (5) evaluate and make reasonable adjustments to administrative and technical safeguards in light of material changes in technology, the risks associated with the automated decision tool, the state of technical standards, and changes in business arrangements or operations of the deployer. (c) A deployer shall designate at least one employee to be responsible for overseeing and maintaining the governance program and compliance with this Act. 
An employee designated under this subsection shall have the authority to assert to the employee's employer a good faith belief that the design, production, or use of an automated decision tool fails to comply with the requirements of this Act. An employer of an employee designated under this subsection shall conduct a prompt and complete assessment of any compliance issue raised by that employee. (d) This Section does not apply to a deployer with fewer than 25 employees unless, as of the end of the prior calendar year, the deployer deployed an automated decision tool that impacted more than 999 people per year.
(a) At least once every 2 years, an operator shall obtain an independent, third-party audit to assess the operator's compliance with this Act. The operator shall make publicly available on its website a high-level summary of the audit's findings, excluding confidential or proprietary information.
(1) The Commonwealth Office of Technology shall create an Artificial Intelligence Governance Committee to govern the use of artificial intelligence systems by state departments, state agencies, and state administrative bodies by: (a) Developing policy standards and guiding principles to mitigate risks and protect data and privacy of Kentucky citizens and businesses that adhere to the latest version of Standard ISO/IEC 42001 of the International Organization for Standardization; (b) Establishing technology standards to provide protocols and requirements for the use of generative artificial intelligence and high-risk artificial intelligence systems;
(2) The Artificial Intelligence Governance Committee shall develop policies and procedures to ensure that any department, program, cabinet, agency, or administrative body that utilizes and accesses the Commonwealth's information technology and technology infrastructure shall: (a) Verify the use and development of generative artificial intelligence systems and high-risk artificial intelligence systems; and (b) Act in compliance with responsible, ethical, and transparent procedures to implement the use of artificial intelligence technologies by: 1. Ensuring artificial intelligence models have comprehensive and complete documentation that is available for review and inspection; 2. Requiring review and intervention by humans dependent on the use case and potential risk for all outcomes from generative and high-risk artificial intelligence systems; and 3. Ensuring the use of generative artificial intelligence and high-risk artificial intelligence systems are resilient, accountable, and explainable.
(7) The Commonwealth Office of Technology shall establish policies encompassing legal and ethical frameworks to ensure that artificial intelligence systems align with existing laws, administrative regulations, and guidelines. These policies shall be updated at least annually to maintain compliance as technology and industry best practices evolve.
(8) (a) Operating standards for utilization of high-risk artificial intelligence systems shall prohibit the use of a high-risk artificial intelligence system to render a consequential decision without the design and implementation of a risk management policy and program for high-risk artificial intelligence systems. The risk management policy shall: 1. Specify the principles, processes, and personnel that shall be utilized to maintain the risk management program; and 2. Identify, mitigate, and document any bias or potential bias that is a potential consequence of use in making a consequential decision. (b) Each risk management policy designed and implemented shall at a minimum adhere to the latest version of Standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, and consider the: 1. Size and complexity of the deployer; 2. Nature, scope, and intended use of the high-risk artificial intelligence system and its deployer; and 3. Sensitivity and volume of data processed.
(q) Establishing, publishing, maintaining, and implementing comprehensive policy standards and procedures for the responsible, ethical, and transparent use of generative artificial intelligence systems and high-risk artificial intelligence systems by departments, agencies, and administrative bodies, including but not limited to policy standards and procedures that: 1. Govern their procurement, implementation, and ongoing assessment; 2. Address and provide resources for security of data and privacy; and 3. Create guidelines for acceptable use policies for integrating high-risk artificial intelligence systems;
B. An employer shall maintain an updated list of all ADS currently in use.
(a) Risk Management Policy: Deployers of high-risk AI systems must implement and maintain a risk management program that: (1) Identifies and mitigates known or foreseeable risks of algorithmic discrimination; (2) Aligns with industry standards, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
(b) (1) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system. A risk management policy and program implemented and maintained pursuant to this subsection (b) must be reasonable considering: (i) (A) the guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (B) any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the
deployer. (2) A risk management policy and program implemented pursuant to subsection (b)(1) of this section may cover multiple high-risk artificial intelligence systems deployed by the deployer.
(c) An employer shall establish, maintain, and preserve for three years contemporaneous, true, and accurate records of data collected via an electronic monitoring tool to ensure compliance with employee or commissioner requests for data. The employer shall destroy any employee information collected via an electronic monitoring tool no later than thirty-seven months after collection unless the employee has provided written and informed consent to the retention of their data by the employer. An employer shall establish, implement and maintain reasonable administrative, technical and physical data security practices to protect the confidentiality, integrity and accessibility of employee data appropriate to the volume and nature of the employee data at issue. An employee shall have the right to request corrections to erroneous employee data.
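The retention and destruction windows in this subsection reduce to simple date arithmetic. The following Python sketch is illustrative only — the helper names are invented here, and reading "three years" as 36 calendar months (to pair cleanly with the thirty-seven-month destruction deadline) is an assumption of the sketch, not statutory language:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date, clamping the day to the month's end."""
    y, m = divmod(d.month - 1 + months, 12)
    y += d.year
    m += 1
    # Last valid day of the target month (with leap-year handling for February).
    last_day = [31, 29 if y % 4 == 0 and (y % 100 != 0 or y % 400 == 0) else 28,
                31, 30, 31, 30, 31, 31, 30, 31, 30, 31][m - 1]
    return date(y, m, min(d.day, last_day))

def retention_deadlines(collected_on: date, consented: bool):
    """Return (retain_until, destroy_by) for a record collected on a given date.

    Records are preserved for three years (approximated here as 36 months)
    and, absent written informed consent, destroyed no later than thirty-seven
    months after collection. With consent there is no destruction deadline.
    """
    retain_until = add_months(collected_on, 36)
    destroy_by = None if consented else add_months(collected_on, 37)
    return retain_until, destroy_by
```

For example, `retention_deadlines(date(2026, 1, 15), consented=False)` yields a retention date of January 15, 2029 and a destruction deadline of February 15, 2029.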
(c) An employer or its vendor shall retain all documentation pertaining to the design, development, use, and data of an automated employment decision tool that may be necessary to conduct an impact assessment. To the extent held by a vendor, the employer shall be granted a license to access this documentation and share this documentation with a labor organization to the extent required by federal or state law, or to the extent required by a court or agency in connection with employment or labor litigation. This includes but is not limited to the source of the data used to develop the tool, the technical specifications of the tool, individuals involved in the development of the tool, and historical use data for the tool. Such documentation must include a historical record of versions of the tool, such that an employer shall be able to attest, in the event of litigation disputing an employment decision, to the nature and specifications of the tool as it was used at the time of that employment decision. Such documentation shall be stored in accordance with such record-keeping, data retention, and security requirements as the commissioner may specify, and in such a manner as to be legible and accessible to the party conducting an impact assessment.
(d) Record and retain for 5 years any specific tests used and results obtained as a part of an assessment of critical risk with sufficient detail for qualified third parties to replicate the testing.
(3) If a large developer publishes a document in accordance with the requirements of this act, the large developer shall publish the information on a conspicuous page on the large developer's website. The large developer may redact the document as reasonably necessary to protect the large developer's trade secrets, public safety, or national security, or to comply with applicable law. An auditor required to perform an audit and produce a report under section 9 may redact information from the report using the same procedure described in this subsection before the publication of that report under section 9(3). (4) If a large developer or auditor makes a redaction under subsection (3), the large developer or auditor shall do both of the following: (a) Retain an unredacted version of the document for not less than 5 years and provide the attorney general with the ability to inspect the unredacted document on request. (b) Describe the character and justification of the redactions in the published version of the document.
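The redaction procedure above pairs each redaction with a published description of its character and justification, plus a five-year obligation to retain the unredacted original for attorney general inspection. A minimal sketch of the record-keeping this implies — all class and field names, and the set of permitted grounds, are illustrative assumptions, not a mandated schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative grounds mirroring the bases for redaction this subsection names.
ALLOWED_GROUNDS = {"trade_secret", "public_safety", "national_security", "applicable_law"}

@dataclass
class Redaction:
    """One redaction in a published document: what kind of material was
    removed and why, as the published version must disclose."""
    character: str       # e.g. "evaluation thresholds"
    justification: str   # why removal was reasonably necessary
    ground: str          # must be one of the permitted bases

    def __post_init__(self):
        if self.ground not in ALLOWED_GROUNDS:
            raise ValueError(f"impermissible redaction ground: {self.ground!r}")

@dataclass
class PublishedDocument:
    title: str
    published_on: date
    redactions: list = field(default_factory=list)

    def retain_unredacted_until(self) -> date:
        # Unredacted original kept for not less than 5 years, inspectable
        # by the attorney general on request. (A February 29 publication
        # date would need day clamping.)
        return self.published_on.replace(year=self.published_on.year + 5)
```

The validation in `__post_init__` reflects that redaction is permitted only on the enumerated grounds; a compliance tool built this way rejects redactions it cannot tie to one of them.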
(1) Beginning on January 1, 2026, not less than once per year, a large developer shall retain a reputable third-party auditor to produce a report that assesses all of the following: (a) If the large developer has complied with the large developer's safety and security protocol and any instances of noncompliance. (b) Any instance in which the large developer's safety and security protocol was not stated clearly enough to determine if the large developer has complied with the safety and security protocol. (c) Any instance in which the auditor believes the large developer violated section 7(2), (3), or (4). (2) A large developer shall grant the auditor access to all materials produced to comply with this act and any other materials reasonably necessary to perform the assessment under subsection (1). (3) Not more than 90 days after the completion of the auditor's report under subsection (1), a large developer shall conspicuously publish that report. (4) In conducting an audit under this section, an auditor shall employ or contract 1 or more individuals with expertise in corporate compliance and 1 or more individuals with technical expertise in the safety of foundation models.
Sec. 7. (1) An employer that collects a covered individual's data shall retain the data for not more than 3 years after the date on which the purpose for using the electronic monitoring tool or automated decisions tool is achieved, unless otherwise specified by a collective bargaining agreement. If the employer does not use any specific data of a covered individual, the employer must delete that data immediately. (2) An employer shall not sell or license a covered individual's data, including, but not limited to, data that is deidentified or aggregated. (3) An employer shall not share data collected under section 4 or 5 with this state or a local unit of government unless otherwise necessary to do any of the following: (a) Provide information to the department. (b) Comply with the requirements of federal, state, or local law. (c) Comply with a court-issued subpoena, warrant, or order.
(4) An employer shall retain all documentation pertaining to the design, development, use, and data of an electronic monitoring tool or automated decisions tool that may be necessary to conduct an impact assessment. The documentation includes, but is not limited to, the source of the data used to develop the tool, the technical specifications of the tool, individuals involved in the development of the tool, historical use data for the tool, and a historical record of the versions of the tool the employer uses. (5) A service provider that contracts with an employer to provide electronic monitoring or automated decisions shall allow the employer access to the documentation described in subsection (4). (6) An employer shall share the documentation described in subsection (4) with a labor organization as required under law or as required by a court or agency in connection with any employment or labor litigation to which the employer is a party. (7) The documentation described in subsection (4) must be stored in manner as prescribed by the director. The director shall prescribe the manner so that the documentation is legible and accessible to the party that conducts an impact assessment of the tool.
Subdivision 1. Data records. (a) Employers must maintain records of worker data collected, used, or produced by an automated decision system and any input or output data used or produced by the automated decision system or used as corroborating evidence by a human reviewer for 36 months after the data's most recent collection, production, or use to ensure compliance with requests for data from workers or the commissioner of labor and industry. (b) Employers must destroy any worker data collected, used, or produced by an automated decision system and any input or output data used or produced by the automated decision system or used as corroborating evidence by a human reviewer no later than 37 months after its most recent collection, production, or use, unless the worker has provided written and informed consent to the retention of the worker's data by the employer. (c) Employers must protect the confidentiality, integrity, and accessibility of worker data using data security practices consistent with data and cyber privacy laws and appropriate to the volume and nature of the worker data collected.
(a) A developer must (1) conduct an annual review of the safety and security protocol required under this section to account for changes to the capabilities of the artificial intelligence model and industry best practices; and (2) modify the safety and security protocol as necessary. (b) If a material modification is made to the safety and security protocol, the developer must publish the safety and security protocol in the same manner required under subdivision 1, clause (3).
Before deploying an artificial intelligence model, a developer must: (5) record and retain information on the specific tests and test results used in any assessment of the artificial intelligence model required under this section or by the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure for the entire period of time an artificial intelligence model is deployed, plus five years;
1. Any private entity in possession of biometric identifiers or biometric information shall develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within one year of the individual's last interaction with the private entity, whichever occurs first. Absent a valid warrant or subpoena issued by a court of competent jurisdiction, a private entity in possession of biometric identifiers or biometric information shall comply with its established retention schedule and destruction guidelines.
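The destruction trigger above is a "whichever occurs first" rule over two dates: satisfaction of the initial collection purpose, or one year after the individual's last interaction. An illustrative sketch (the function name and the 365-day reading of "one year" are assumptions of this sketch):

```python
from datetime import date, timedelta

def biometric_destruction_deadline(purpose_satisfied_on, last_interaction_on):
    """Earliest of: the date the initial collection purpose was satisfied, or
    one year after the individual's last interaction with the entity --
    whichever occurs first. Either input may be None if that trigger has not
    yet occurred; if neither has occurred, no deadline exists yet."""
    candidates = []
    if purpose_satisfied_on is not None:
        candidates.append(purpose_satisfied_on)
    if last_interaction_on is not None:
        candidates.append(last_interaction_on + timedelta(days=365))
    return min(candidates) if candidates else None
```

For instance, if the purpose is satisfied on June 1, 2027 but the individual's last interaction was January 10, 2026, the one-year clock expires first and controls.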
(a) A licensee shall maintain professional liability insurance in an amount not less than the amount per occurrence required by the Department. (b) A licensee shall do all of the following: (1) Implement industry-standard encryption for data in transit and at rest, maintain detailed access logs, and conduct regular security audits no less than once every six (6) months.
(e) A licensee shall conduct regular inspections and perform an annual third-party audit. Results of all inspections and audits must be made available to the Department.
Every person who is a manufacturer or importer of a licensed chatbot under this Chapter shall establish and maintain such records, and make such reports to the Director, as the Director may by regulation reasonably require to assure the safety and effectiveness of such devices.
(6)(a) When a large frontier developer or large chatbot provider publishes documents to comply with this section, the large frontier developer or large chatbot provider may make redactions to those documents that are necessary to protect the large frontier developer's trade secrets, the large frontier developer's or large chatbot provider's cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law. (b) If a large frontier developer or large chatbot provider redacts information in a document pursuant to subdivision (6)(a) of this section, the large frontier developer or large chatbot provider shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
(2)(a) Except as otherwise provided in subsection (6) of this section, on and after February 1, 2026, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. High-risk artificial intelligence systems that are in conformity with the guidance and standards set forth in the following as of January 1, 2025, shall be presumed to be in conformity with this section: (i) The Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology; or (ii) The standard ISO/IEC 42001 of the International Organization for Standardization. (b) Any risk management policy and program implemented pursuant to subdivision (a) of this subsection may cover multiple high-risk artificial intelligence systems deployed by the deployer.
b. An employer or public entity shall make, keep, and preserve, for not less than three years, true and accurate records, including complete records of data and information about an employee, service beneficiary, or applicant for employment collected by an EMT or other surveillance and all data and information used by an AEDS for outputs concerning the employee, service beneficiary, or applicant, and all performance evaluations, validation results, and impact assessments. Any data or information for which an applicant has exercised their right to have destroyed pursuant to subsection a. of this section shall be exempt from the record retention requirements of this subsection once the records are destroyed. The employer or public entity shall destroy the data and information no later than 37 months after collection unless the employee, service beneficiary, or applicant has provided uncoerced written consent for the employer or public entity to retain them.
(a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer of a high-risk artificial intelligence decision system shall implement and maintain a risk management policy and program to govern such deployer's deployment of the high-risk artificial intelligence decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer shall use to identify, document, and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program shall each be the product of an iterative process and shall be planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence decision system. Each risk management policy and program implemented and maintained pursuant to this subdivision shall be reasonable, considering: (i) the guidance and standards set forth in the latest version of: (A) the "Artificial Intelligence Risk Management Framework" published by the national institute of standards and technology; (B) standard ISO/IEC 42001 of the international organization for standardization; or (C) a nationally or internationally recognized risk management framework for artificial intelligence decision systems, other than the guidance and standards specified in clauses (A) and (B) of this subparagraph, that imposes requirements that are substantially equivalent to, and at least as stringent as, the requirements established pursuant to this section for risk management policies and programs; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence decision systems deployed by the deployer, including, but not limited to, the intended uses of such high-risk artificial intelligence decision
systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence decision systems deployed by the deployer. (b) A risk management policy and program implemented and maintained pursuant to paragraph (a) of this subdivision may cover multiple high-risk artificial intelligence decision systems deployed by the deployer.
Beginning on January first, two thousand twenty-seven, each developer of a general-purpose artificial intelligence model shall, except as provided in subdivision two of this section: (a) create and maintain technical documentation for the general-purpose artificial intelligence model, which shall: (i) include: (A) the training and testing processes for such general-purpose artificial intelligence model; and (B) the results of an evaluation of such general-purpose artificial intelligence model performed to determine whether such general-purpose artificial intelligence model is in compliance with the provisions of this article; (ii) include, as appropriate, considering the size and risk profile of such general-purpose artificial intelligence model, at least: (A) the tasks such general-purpose artificial intelligence model is intended to perform; (B) the type and nature of artificial intelligence decision systems in which such general-purpose artificial intelligence model is intended to be integrated; (C) acceptable use policies for such general-purpose artificial intelligence model; (D) the date such general-purpose artificial intelligence model is released; (E) the methods by which such general-purpose artificial intelligence model is distributed; and (F) the modality and format of inputs and outputs for such general-purpose artificial intelligence model; and (iii) be reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such technical documentation;
(a) The provisions of paragraph (a) and subparagraph (iii) of paragraph (b) of subdivision one of this section shall not apply to a developer that develops, or intentionally and substantially modifies, a general-purpose artificial intelligence model on or after January first, two thousand twenty-seven, if: (i) (A) the developer releases such general-purpose artificial intelligence model under a free and open-source license that allows for: (I) access to, and modification, distribution, and usage of, such general-purpose artificial intelligence model; and (II) the parameters of such general-purpose artificial intelligence model to be made publicly available pursuant to clause (B) of this subparagraph; and (B) unless such general-purpose artificial intelligence model is deployed as a high-risk artificial intelligence decision system, the parameters of such general-purpose artificial intelligence model, including, but not limited to, the weights and information concerning the model architecture and model usage for such general-purpose artificial intelligence model, are made publicly available; or (ii) the general-purpose artificial intelligence model is: (A) not offered for sale in the market; (B) not intended to interact with consumers; and (C) solely utilized: (I) for an entity's internal purposes; or (II) pursuant to an agreement between multiple entities for such entities' internal purposes. (b) The provisions of this section shall not apply to a developer that develops, or intentionally and substantially modifies, a general-purpose artificial intelligence model on or after January first, two thousand twenty-seven, if such general-purpose artificial intelligence model performs tasks exclusively related to an entity's internal management affairs, including, but not limited to, ordering office supplies or processing payments.
(c) A developer that takes any action under an exemption pursuant to paragraph (a) or (b) of this subdivision shall bear the burden of demonstrating that such action qualifies for such exemption. (d) A developer that is exempt pursuant to subparagraph (ii) of paragraph (a) of this subdivision shall establish and maintain an artificial intelligence risk management framework, which shall: (i) be the product of an iterative process and ongoing efforts; and (ii) include, at a minimum: (A) an internal governance function; (B) a map function that shall establish the context to frame risks; (C) a risk management function; and (D) a function to measure identified risks by assessing, analyzing and tracking such risks.
1. Every operator of a licensed high-risk advanced artificial intelligence system or systems shall establish an ethics and risk management board composed of no less than five individuals who shall have the responsibility to assess the ethical implications of all possible use cases of the system, whether such use cases are intended or unintended and whether likely or unlikely to occur, and the current operational outcomes of the system. Such operator, other than an operator who is a natural person, operating more than one high-risk advanced artificial intelligence system with a supplemental license shall not be required to have more than one ethics and risk management board for all such systems. 2. No member of an ethics and risk management board shall be a member, officer, or director within the operator's entity. No member shall be required to be employed by the operator. 3. Such board shall adopt rules governing its decision-making processes, duties and responsibilities. Such rules shall not conflict with the provisions of this article.
Every time a licensee's system operates, it shall automatically generate a log. Standards related to the specific types of events that are required to be logged, the format in which logs must be kept, the individuals or entities permitted to access logs and the conditions governing such access, the encryption and cybersecurity protocols to be applied to logs, the procedures for both the preservation and disposal of logs, and any other actions pertinent to log management shall conform to the standards set by the secretary. Such logs shall be preserved for a period of ten years from the date they are generated and shall be subject to inspection under section five hundred twenty-six of this article.
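A log entry generated under this section needs a generation timestamp, a ten-year preservation horizon, and enough tamper evidence to survive inspection. The sketch below is illustrative only — the field names, event taxonomy, and hashing choice are assumptions of this sketch; the actual format, access rules, and encryption protocols are whatever the secretary's standards prescribe:

```python
import hashlib
import json
from datetime import datetime, timezone

LOG_RETENTION_YEARS = 10  # logs preserved ten years from the date generated

def _plus_years(dt: datetime, years: int) -> datetime:
    try:
        return dt.replace(year=dt.year + years)
    except ValueError:  # Feb 29 -> Feb 28 in a non-leap target year
        return dt.replace(year=dt.year + years, day=28)

def make_log_entry(event_type: str, detail: dict, generated_at=None) -> dict:
    """Build one operation log entry with a preservation deadline and a
    content hash so an inspector can detect after-the-fact edits."""
    if generated_at is None:
        generated_at = datetime.now(timezone.utc)
    entry = {
        "event_type": event_type,
        "generated_at": generated_at.isoformat(),
        "preserve_until": _plus_years(generated_at, LOG_RETENTION_YEARS).isoformat(),
        "detail": detail,
    }
    # Hash of the entry's contents, computed before the hash field is added.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["sha256"] = hashlib.sha256(payload).hexdigest()
    return entry
```

An inspector verifying an entry recomputes the hash over the entry minus its `sha256` field and compares; any edit to the logged contents changes the digest.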
1. Every operator shall maintain such books, records, source code, and logs as the secretary shall require; provided, however, that every operator shall, at least, maintain a copy of all logs generated from the system as well as a backup of every version of the system, which shall be stored in a safe manner as prescribed by the secretary. 2. By a date to be set by the secretary, each operator shall annually file a report with the secretary giving such information as the secretary may require concerning the business and operations during the preceding calendar year of the operator within the state under the authority of this article. Such report shall be subscribed and affirmed as true by the operator under the penalties of perjury and be in the form prescribed by the secretary. In addition to such annual reports, the secretary may require of operators such additional regular or special reports as the secretary may deem necessary to the proper supervision of operators under this article. Such additional reports shall be in the form prescribed by the secretary and shall be subscribed and affirmed as true under the penalties of perjury.
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
(a) Beginning on the effective date of this article, or ninety days after a developer first qualifies as a large developer, whichever is later, a large developer shall annually retain a third party to perform an independent audit of compliance with the requirements of this section. Such third party shall conduct audits consistent with best practices. (b) The third party shall be granted access to unredacted materials as necessary to comply with the third party's obligations under this subdivision. (c) The third party shall produce a report including all of the following: (i) A detailed assessment of the large developer's steps to comply with the requirements of this section; (ii) If applicable, any identified instances of noncompliance with the requirements of this section, and any recommendations for how the developer can improve its policies and processes for ensuring compliance with the requirements of this section; (iii) A detailed assessment of the large developer's internal controls, including its designation and empowerment of senior personnel responsible for ensuring compliance by the large developer, its employees, and its contractors; and (iv) The signature of the lead auditor certifying the results of the audit. (d) The large developer shall retain an unredacted copy of the report for as long as a frontier model is deployed plus five years. (e) (i) The large developer shall conspicuously publish a copy of the third party's report with appropriate redactions and transmit a copy of such redacted report to the division of homeland security and emergency services. (ii) The large developer shall grant the division of homeland security and emergency services or the attorney general access to the third party's report, with redactions only to the extent required by federal law, upon request.
Every person, firm, partnership, association or corporation doing business or offering products to consumers in New York state shall develop a responsible capability scaling policy for the use and development of artificial intelligence by such entity.
1. Each developer or deployer of high-risk AI systems shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of such high-risk AI system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under subdivision one of section eighty-six of this article. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated, including updates to documentation, over the life cycle of a high-risk AI system. A risk management policy and program implemented and maintained pursuant to this section shall be reasonable considering: (a) The guidance and standards set forth in: (i) version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States department of commerce, or (ii) another substantially equivalent framework selected at the discretion of the attorney general, if such framework was designed to manage risks associated with AI systems, is nationally or internationally recognized and consensus-driven, and is at least as stringent as version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology; (b) The size and complexity of the developer or deployer; (c) The nature, scope, and intended uses of the high-risk AI system developed or deployed; and (d) The sensitivity and volume of data processed in connection with the high-risk AI system. 2.
A risk management policy and program implemented pursuant to subdivision one of this section may cover multiple high-risk AI systems developed by the same developer or deployed by the same deployer if sufficient to address each such system.
Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, for as long as the frontier model is deployed plus five years.
Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model required by this section or the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure.
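"Sufficient detail for third parties to replicate the testing procedure" implies capturing, at minimum, the exact artifact tested, the procedure, its inputs, and the observed result. The record structure below is an illustrative sketch, not a mandated schema — every field name is an assumption of this sketch:

```python
from dataclasses import dataclass, asdict, field

@dataclass(frozen=True)
class FrontierTestRecord:
    """One safety assessment, captured with enough detail that a third party
    could re-run it. Retained for as long as the frontier model is deployed,
    plus five years."""
    model_id: str           # identifier of the frontier model version tested
    test_name: str          # which assessment this record documents
    procedure: str          # step-by-step description of the test protocol
    parameters: dict        # seeds, prompts, thresholds, and other inputs
    result: str             # the observed outcome
    conducted_on: str       # ISO 8601 date the test was run

def replication_bundle(records) -> list:
    """Serialize test records into plain dicts suitable for long-term
    retention alongside the safety and security protocol."""
    return [asdict(r) for r in records]
```

Freezing the dataclass reflects the retention posture: a record documenting a past test should not be mutated after the fact, only superseded by a new record.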
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any material modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
"Safety and security protocol" means documented technical and organizational protocols that: ... (e) Designate senior personnel to be responsible for ensuring compliance.
A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced pursuant to this section.
2. (a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer of a high-risk artificial intelligence decision system shall implement and maintain a risk management policy and program to govern such deployer's deployment of the high-risk artificial intelligence decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer shall use to identify, document, and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy shall be the product of an iterative process, the risk management program shall be an iterative process and both the risk management policy and program shall be planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence decision system. Each risk management policy and program implemented and maintained pursuant to this subdivision shall be reasonable, considering: (i) the guidance and standards set forth in the latest version of: (A) the "Artificial Intelligence Risk Management Framework" published by the national institute of standards and technology; (B) ISO or IEC 42001 of the international organization for standardization; or (C) a nationally or internationally recognized risk management framework for artificial intelligence decision systems, other than the guidance and standards specified in clauses (A) and (B) of this subparagraph, that imposes requirements that are substantially equivalent to, and at least as stringent as, the requirements established pursuant to this section for risk management policies and programs; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence decision systems deployed by the deployer, including, but not limited to, the intended uses of such high-risk artificial intelligence 
decision systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence decision systems deployed by the deployer. (b) A risk management policy and program implemented and maintained pursuant to paragraph (a) of this subdivision may cover multiple high-risk artificial intelligence decision systems deployed by the deployer.
1. Beginning on January first, two thousand twenty-seven, each developer of a general-purpose artificial intelligence model shall, except as provided in subdivision two of this section: (a) create and maintain technical documentation for the general-purpose artificial intelligence model, which shall: (i) include: (A) the training and testing processes for such general-purpose artificial intelligence model; and (B) the results of an evaluation of such general-purpose artificial intelligence model performed to determine whether such general-purpose artificial intelligence model is in compliance with the provisions of this article; (ii) include, as appropriate, considering the size and risk profile of such general-purpose artificial intelligence model, at least: (A) the tasks such general-purpose artificial intelligence model is intended to perform; (B) the type and nature of artificial intelligence decision systems in which such general-purpose artificial intelligence model is intended to be integrated; (C) acceptable use policies for such general-purpose artificial intelligence model; (D) the date such general-purpose artificial intelligence model is released; (E) the methods by which such general-purpose artificial intelligence model is distributed; and (F) the modality and format of inputs and outputs for such general-purpose artificial intelligence model; and (iii) be reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such technical documentation; and (b) create, implement, maintain and make available to persons that intend to integrate such general-purpose artificial intelligence model into such persons' artificial intelligence decision systems documentation and information that: (i) enables such persons to: (A) understand the capabilities and limitations of such general-purpose artificial intelligence model; and (B) comply with such persons' obligations pursuant to this article; (ii) discloses, at a minimum: (A) 
the technical means required for such general-purpose artificial intelligence model to be integrated into such persons' artificial intelligence decision systems; (B) the information listed in subparagraph (ii) of paragraph (a) of this subdivision; and (iii) except as provided in subdivision two of this section, is reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such documentation and information.
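The minimum documentation contents enumerated in subparagraph (ii), together with the at-least-annual review requirement in subparagraph (iii), map naturally onto a structured record. A hypothetical sketch in which the field names paraphrase the statute and are not official terms:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class GPAIModelDocumentation:
    # Subparagraph (ii) minimum contents, paraphrased:
    intended_tasks: list[str]                 # (A) tasks the model performs
    target_decision_system_types: list[str]   # (B) systems it integrates into
    acceptable_use_policy: str                # (C)
    release_date: date                        # (D)
    distribution_methods: list[str]           # (E)
    input_output_modalities: str              # (F)
    last_reviewed: date

    def review_overdue(self, today: date) -> bool:
        """Subparagraph (iii): review and revise at least annually."""
        return (today - self.last_reviewed).days > 365
```

A developer tracking many models could run `review_overdue` across its inventory to flag documentation due for its annual revision.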
(d) A developer that is exempt pursuant to subparagraph (ii) of paragraph (a) of this subdivision shall establish and maintain an artificial intelligence risk management framework, which shall: (i) be the product of an iterative process and ongoing efforts; and (ii) include, at a minimum: (A) an internal governance function; (B) a map function that shall establish the context to frame risks; (C) a risk management function; and (D) a function to measure identified risks by assessing, analyzing and tracking such risks.
(d) Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure;
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
(a) Beginning on the effective date of this article, or ninety days after a developer first qualifies as a large developer, whichever is later, a large developer shall annually retain a third party to perform an independent audit of compliance with the requirements of this section. Such third party shall conduct audits consistent with best practices. (b) The third party shall be granted access to unredacted materials as necessary to comply with the third party's obligations under this subdivision. (c) The third party shall produce a report including all of the following: (i) A detailed assessment of the large developer's steps to comply with the requirements of this section; (ii) If applicable, any identified instances of noncompliance with the requirements of this section, and any recommendations for how the developer can improve its policies and processes for ensuring compliance with the requirements of this section; (iii) A detailed assessment of the large developer's internal controls, including its designation and empowerment of senior personnel responsible for ensuring compliance by the large developer, its employees, and its contractors; and (iv) The signature of the lead auditor certifying the results of the audit. (d) The large developer shall retain an unredacted copy of the report for as long as a frontier model is deployed plus five years. (e) (i) The large developer shall conspicuously publish a copy of the third party's report with appropriate redactions and transmit a copy of such redacted report to the division of homeland security and emergency services. (ii) The large developer shall grant the division of homeland security and emergency services or the attorney general access to the third party's report, with redactions only to the extent required by federal law, upon request.
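The four required elements of the third party's report under paragraph (c) can be treated as a completeness checklist before the report is retained, published in redacted form, and transmitted. Purely illustrative (the key names are paraphrases, not statutory terms):

```python
REQUIRED_REPORT_ELEMENTS = (
    "compliance_assessment",     # (c)(i) steps taken to comply
    "noncompliance_findings",    # (c)(ii) instances and recommendations
    "internal_controls_review",  # (c)(iii) controls and senior personnel
    "lead_auditor_signature",    # (c)(iv) certification of results
)

def report_is_complete(report: dict) -> bool:
    """A third-party audit report must contain every required element."""
    return all(report.get(key) for key in REQUIRED_REPORT_ELEMENTS)
```

A report missing any element, including the lead auditor's certifying signature, would fail the check.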
D. All documentation shall comply with state and federal medical record-keeping requirements and be accessible for regulatory review. Documentation of relevant instances where a qualified end-user overrides or disagrees with AI device-generated outputs must be maintained through a summary report indicating the frequency and nature of overrides. Deployers shall document the percentage or number of such overrides or disagreements.
A. Deployers of any artificial intelligence (AI) device shall establish an AI governance group with representation from qualified end-users. This governance group is responsible for overseeing compliance with this act.
B. Deployers shall maintain an updated inventory of deployed AI devices, with device instructions for use and any relevant safety and effectiveness documentation made accessible to all qualified end-users of the device.
E. Deployers shall document the use case and user training procedure for the AI device.
§ 3506. Retention of records. The department shall establish a record retention policy and determine the amount of time a facility shall retain records related to artificial-intelligence algorithms. The department may request input from facilities and health care providers or their representatives in making the determination under this section.
(d) Documentation.--A supplier shall maintain documentation regarding the development and implementation of the chatbot that describes: (1) Foundation models used in development. (2) Training data used. (3) Compliance with Federal and State privacy law. (4) Consumer data collection and sharing practices. (5) Ongoing efforts to ensure accuracy, reliability, fairness and safety.
(g) Compliance.--A supplier shall comply with the requirements of the policy filed in accordance with this section.
The department shall establish a record retention policy and determine the amount of time an insurer shall retain records. The department may request input from insurers or their representatives in making this determination.
The department shall establish a record retention policy and determine the amount of time an MA or CHIP managed care plan shall retain records. The department may request input from an MA or CHIP managed care plan or their representative to make this determination.
Insurers shall maintain documentation of artificial intelligence decisions for at least five (5) years including adverse benefit determinations where artificial intelligence made, or was a substantial factor in making, the adverse benefit determination.
(d) An employer shall establish, maintain, and preserve for five (5) years contemporaneous, true, and accurate records of data gathered through the use of an electronic monitoring tool and used in a hiring, promotion, termination, disciplinary, or compensation decision, to ensure compliance with requests for data from the employee, the employee's authorized representative, or the department. The employer shall destroy any employee information collected via an electronic monitoring tool no later than sixty-one (61) months after collection unless the employee has provided written and informed consent to the employer's retention of their data. An employer shall establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of employee data, appropriate to the volume and nature of the employee data at issue. An employee shall have the right to request corrections to erroneous employee data.
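Note the interaction between the two deadlines in this subsection: records used in an employment decision must be preserved for five years (60 months), while monitored data must be destroyed no later than 61 months after collection absent consent, leaving roughly a one-month window between the preservation floor and the destruction ceiling. A hypothetical sketch of the two dates (the month-shifting helper is an illustrative convention, not statutory text):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day
    to the last valid day of the target month."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    for day in (d.day, 30, 29, 28):  # e.g., Jan 31 + 1 month -> Feb 28/29
        try:
            return date(year, month, day)
        except ValueError:
            continue

collected = date(2027, 3, 15)
preserve_through = add_months(collected, 60)  # five-year preservation floor
destroy_by = add_months(collected, 61)        # destruction ceiling absent consent
print(preserve_through, destroy_by)  # 2032-03-15 2032-04-15
```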
(D) A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of personal data and chat logs that are maintained by the chatbot provider. The program must be written and made publicly available on the chatbot provider's website.
(B)(1) Except as provided in subsection (F), a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable considering: (a)(i) The guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (ii) any risk management framework for artificial intelligence systems that the Attorney General, in his discretion, may designate; (b) the size and complexity of the deployer; (c) the nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and (d) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer. 
(2) A risk management policy and program implemented pursuant to item (1) may cover multiple high-risk artificial intelligence systems deployed by the deployer.
Sec. 2054.702. ARTIFICIAL INTELLIGENCE SYSTEM CODE OF ETHICS. (a) The department by rule shall establish an artificial intelligence system code of ethics for use by state agencies and local governments that procure, develop, deploy, or use artificial intelligence systems. (b) At a minimum, the artificial intelligence system code of ethics must include guidance for the deployment and use of artificial intelligence systems and heightened scrutiny artificial intelligence systems that aligns with the Artificial Intelligence Risk Management Framework (AI RMF 1.0) published by the National Institute of Standards and Technology. The guidance must address: (1) human oversight and control; (2) fairness and accuracy; (3) transparency, including consumer disclosures; (4) data privacy and security; (5) public and internal redress, including accountability and liability; and (6) the frequency of evaluations and documentation of improvements. (c) State agencies and local governments shall adopt the code of ethics developed under this section.
Sec. 2054.703. MINIMUM STANDARDS FOR HEIGHTENED SCRUTINY ARTIFICIAL INTELLIGENCE SYSTEMS. (a) The department by rule shall develop minimum risk management and governance standards for the development, procurement, deployment, and use of heightened scrutiny artificial intelligence systems by a state agency or local government. (b) The minimum standards must be consistent with the Artificial Intelligence Risk Management Framework (AI RMF 1.0) published by the National Institute of Standards and Technology and must: (1) establish accountability measures, such as required reports describing the use of, limitations of, and safeguards for the heightened scrutiny artificial intelligence system; (2) require the assessment and documentation of the heightened scrutiny artificial intelligence system's known security risks, performance metrics, and transparency measures: (A) before deploying the system; and (B) at the time any material change is made to: (i) the system; (ii) the state or local data used by the system; or (iii) the intended use of the system; (3) provide to local governments resources that advise on managing, procuring, and deploying a heightened scrutiny artificial intelligence system, including data protection measures and employee training; and (4) establish guidelines for: (A) risk management frameworks, acceptable use policies, and training employees; and (B) mitigating the risk of unlawful harm by contractually requiring vendors to implement risk management frameworks when deploying heightened scrutiny artificial intelligence systems on behalf of state agencies or local governments. (c) State agencies and local governments shall adopt the standards developed under Subsection (a).
(a-1) A state agency with 150 or fewer full-time employees may: (1) designate a full-time employee of the agency to serve as a data management officer; or (2) enter into an agreement with one or more state agencies to jointly employ a data management officer if approved by the department. (c) In accordance with department guidelines, the data management officer for a state agency shall annually post on the Texas Open Data Portal established by the department under Section 2054.070 at least three high-value data sets as defined by Section 2054.1265. The high-value data sets may not include information that is confidential or protected from disclosure under state or federal law.
(2) A participant shall: (a) provide required information to state agencies in accordance with the terms of the participation agreement; and (b) report to the office as required in the participation agreement. ... (4) A participant shall retain records as required by office rule or the participation agreement.
(4) A regulatory mitigation agreement between a participant and the office and relevant agencies shall specify: (a) limitations on scope of the use of the participant's artificial intelligence technology, including: (i) the number and types of users; (ii) geographic limitations; and (iii) other limitations to implementation; (b) safeguards to be implemented; and (c) any regulatory mitigation granted to the applicant. ... (6) A participant remains subject to all legal and regulatory requirements not expressly waived or modified by the terms of the regulatory mitigation agreement.
E. The first draft of any report or record created in whole or in part by using generative artificial intelligence shall be retained for as long as the final report is retained. The program used to generate a draft or final report shall maintain an audit trail that, at a minimum, identifies (i) the person who used artificial intelligence to create or edit the report; (ii) any changes made to the report following the initial draft; and (iii) the video and audio footage used to create a report, if any.
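The audit-trail minimums in clauses (i) through (iii) map naturally onto an append-only log of immutable entries. A purely illustrative sketch, assuming one entry per create-or-edit event (names are not drawn from the bill):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ReportAuditEntry:
    """One immutable audit-trail entry for an AI-assisted report."""
    author: str                           # (i) who used AI to create/edit
    change_description: str               # (ii) changes after initial draft
    source_footage_ids: tuple[str, ...]   # (iii) video/audio used, if any
    timestamp: datetime

def audit_trail_complete(entries: list[ReportAuditEntry]) -> bool:
    """Every entry must identify its author; footage may be absent."""
    return bool(entries) and all(e.author for e in entries)
```

Freezing the dataclass reflects the audit-trail character of the requirement: entries record what happened and are not edited after the fact.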
Each carrier shall ... (iii) maintain documentation of AI decisions for at least three years;
(a) Each developer or deployer of automated decision systems used in consequential decisions shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of the automated decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under section 4193b of this title. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of an automated decision system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this subsection shall be reasonable considering the: (1) guidance and standards set forth in version 1.0 of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology in the U.S. Department of Commerce, or the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology if, in the Attorney General's discretion, the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology in the U.S. Department of Commerce is at least as stringent as version 1.0; (2) size and complexity of the developer or deployer; (3) nature, scope, and intended uses of the automated decision system developed or deployed for use in consequential decisions; and (4) sensitivity and volume of data processed in connection with the automated decision system. 
(b) A risk management policy and program implemented pursuant to subsection (a) of this section may cover multiple automated decision systems developed by the same developer or deployed by the same deployer for use in consequential decisions if sufficient.
(b) No deployer shall deploy an inherently dangerous artificial intelligence system or an artificial intelligence system that creates reasonably foreseeable risks pursuant to section 4193f of this subchapter unless the deployer has designed and implemented a risk management policy and program for the model or system. The risk management policy shall specify the principles, processes, and personnel that the deployer shall use in maintaining the risk management program to identify, mitigate, and document any risk that is a reasonably foreseeable consequence of deploying or using the system. Each risk management policy and program designed, implemented, and maintained pursuant to this subsection shall be: (1) at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the NIST; and (2) reasonable considering: (A) the size and complexity of the deployer; (B) the nature and scope of the system, including the intended uses and unintended uses and the modifications made to the system by the deployer; and (C) the data that the system, once deployed, processes as inputs.
(d) Data security program. A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of the personal data and chat logs maintained by the chatbot provider. The program shall be written and made publicly available on the chatbot provider's website.
(a) It is an affirmative defense to liability in an action for unlawful or unprofessional conduct brought against a supplier by the Office of Professional Regulation or the Board of Medical Practice if the supplier demonstrates that the supplier meets all of the following conditions: (1) the supplier created, maintained, and implemented a policy that meets the requirements of subsection (b) of this section; (2) the supplier maintains documentation regarding the development and implementation of the mental health chatbot that describes: (A) foundation models used in development; (B) training tools used; (C) compliance with federal health privacy regulations; (D) user data collection and sharing practices; and (E) ongoing efforts to ensure accuracy, reliability, fairness, and safety; (3) the supplier filed the policy with the Office of the Attorney General; and (4) the supplier complied with all requirements of the filed policy at the time of the alleged violation. (b) A policy described in subdivision (a)(1) of this section shall meet all of the following requirements: (1) be in writing; (2) clearly state: (A) the intended purposes of the mental health chatbot; and (B) the abilities and limitations of the mental health chatbot; (3) describe the procedures by which the supplier: (A) ensures that qualified mental health providers licensed in Vermont or in one or more other states, or both, are involved in the development and review process; (B) ensures that the mental health chatbot is developed and monitored in a manner consistent with clinical best practices; (C) conducts testing prior to making the mental health chatbot publicly available and regularly thereafter to ensure that the output of the mental health chatbot poses no greater risk to a user than that posed to an individual in psychotherapy with a licensed mental health provider; (D) identifies reasonably foreseeable adverse outcomes to and potentially harmful interactions with users that could result from 
using the mental health chatbot; (E) provides a mechanism for a user to report any potentially harmful interactions from use of the mental health chatbot; (F) implements protocols to assess and respond to risk of harm to users or other individuals; (G) details actions taken to prevent or mitigate any such adverse outcomes or potentially harmful interactions; (H) implements protocols to respond in real time to acute risk of physical harm; (I) reasonably ensures regular, objective reviews of safety, accuracy, and efficacy, which may include internal or external audits; (J) provides users any necessary instructions on the safe use of the mental health chatbot; (K) ensures users understand that they are interacting with artificial intelligence; (L) ensures users understand the intended purpose, capabilities, and limitations of the mental health chatbot; (M) prioritizes user mental health and safety over engagement metrics or profit; (N) implements measures to prevent discriminatory treatment of users; and (O) ensures compliance with the security and privacy protections of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A, C, and E, as if the supplier were a covered entity, and applicable consumer protection requirements, including sections 9761-9763 of this subchapter.
(5) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
(6) For a disclosure required pursuant to this section, a developer shall, no later than 90 days after the developer performs an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
(2)(a) A deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has designed and implemented a risk management policy and program for such high-risk artificial intelligence system. The risk management policy must specify the principles, processes, and personnel that the deployer must use in maintaining the risk management program to identify, mitigate, and document any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using such high-risk artificial intelligence system to make a consequential decision. (b) A risk management policy and program designed, implemented, and maintained pursuant to this section is presumed to be in conformity with related requirements set out in this section if the policy and program align with the guidance and standards set forth in the latest version of: (i) The artificial intelligence risk management framework published by the national institute of standards and technology; (ii) Standard ISO/IEC 42001 of the international organization for standardization; or (iii) A nationally or internationally recognized risk management framework for artificial intelligence systems with requirements that are substantially equivalent to, and at least as stringent as, the guidance and standards described in (b)(i) and (ii) of this subsection (2). (c) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
(7) For a disclosure required pursuant to this section, each deployer shall, no later than 30 days after the deployer is notified by the developer that the developer has performed an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
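The two update windows above — 90 days for a developer measured from the modification itself, 30 days for a deployer measured from the developer's notification — can be sketched as a simple deadline calculation. This is an illustrative compliance aid only; the function and constant names are assumptions, not statutory terms.

```python
from datetime import date, timedelta

# Illustrative deadline tracker for the disclosure-update duties described
# above: developers have 90 days from an intentional and substantial
# modification; deployers have 30 days from being notified of it.
# All identifiers here are hypothetical, not drawn from the statute.

DEVELOPER_WINDOW_DAYS = 90
DEPLOYER_WINDOW_DAYS = 30

def developer_update_deadline(modification_date: date) -> date:
    """Latest date for the developer to update its disclosure."""
    return modification_date + timedelta(days=DEVELOPER_WINDOW_DAYS)

def deployer_update_deadline(notification_date: date) -> date:
    """Latest date for the deployer to update its disclosure."""
    return notification_date + timedelta(days=DEPLOYER_WINDOW_DAYS)

# Example: modification performed March 1, 2027; deployer notified March 10.
# developer deadline -> 2027-05-30, deployer deadline -> 2027-04-09
```

Note that the deployer's clock runs from notification, not from the modification, so the two deadlines are independent of each other.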
(1) Beginning July 1, 2027, and except as provided in section 5(6) of this act, each deployer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the deployer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the deployer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. 
(c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer.
(5) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
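The conformity presumption above reduces to a membership test: a system aligned with the NIST AI risk management framework, ISO/IEC 42001, or another recognized or substantially equivalent framework is presumed to conform. A minimal sketch, assuming illustrative framework labels and function names that are not statutory terms:

```python
# Hypothetical check for the conformity presumption described above.
# Framework identifiers are illustrative labels, not official designations.

RECOGNIZED_FRAMEWORKS = {
    "NIST AI RMF",      # NIST artificial intelligence risk management framework
    "ISO/IEC 42001",    # international AI management system standard
}

def presumed_in_conformity(program_frameworks: set[str],
                           designated_equivalents: set[str] = frozenset()) -> bool:
    """True if the risk management program aligns with at least one
    recognized framework or a designated equivalent framework."""
    return bool(program_frameworks & (RECOGNIZED_FRAMEWORKS | designated_equivalents))
```

The presumption is rebuttable in most such statutory schemes: alignment shifts the posture of an enforcement inquiry rather than conclusively establishing compliance.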
(3) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
(1) Beginning July 1, 2027, and except as provided in section 6(6) of this act, each developer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the developer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the developer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the developer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the developer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the developer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate.
(c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the developer. (3) A developer that also serves as a deployer for any high-risk artificial intelligence system is not required to generate the documentation required by this section unless such high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer or as otherwise required by law. (4) Nothing in this section may be construed to require a developer to disclose any trade secret, or other confidential or proprietary information. (5) This section does not apply to a developer with fewer than 50 full-time equivalent employees.
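The carve-outs in subsections (3) and (5) above can be expressed as a short decision helper: a small developer is exempt outright, and a developer that deploys only internally need not generate the documentation. The parameter names and FTE measure are assumptions for the sketch, not statutory terms.

```python
# Illustrative decision helper for the developer-section carve-outs above.
# All names are hypothetical; the 50-FTE threshold is taken from the text.

def must_generate_documentation(fte_employees: float,
                                also_deployer: bool,
                                provided_to_unaffiliated_deployer: bool) -> bool:
    """Rough eligibility check for the documentation duty."""
    if fte_employees < 50:
        # (5): the section does not apply to developers under 50 FTE.
        return False
    if also_deployer and not provided_to_unaffiliated_deployer:
        # (3): developer-as-deployer keeping the system in-house is exempt,
        # absent some other legal requirement.
        return False
    return True
```

A 40-FTE developer is exempt regardless of deployment; a 100-FTE developer that supplies the system to an unaffiliated deployer must generate the documentation.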