Organizations developing or deploying AI must establish a formal AI governance program, maintain contemporaneous records of AI system design, testing, and deployment decisions, and designate a responsible individual or office for AI governance. Program establishment is not a one-time exercise; ongoing maintenance, recordkeeping, and accountability designation are continuing obligations.
A chatbot provider shall develop, implement and maintain a comprehensive data security program that contains administrative, technical and physical safeguards that are proportionate to the volume and nature of personal data and chat logs that are maintained by the chatbot provider. The program shall be written and made publicly available on the chatbot provider's website.
(a) An operator shall document all of the following with respect to any companion chatbot that the operator makes available in this state: (1) The existence of a graduated response system. (2) All credible crisis expressions detected by the companion chatbot. (3) The duration and conditions of a crisis interruption pause initiated by the companion chatbot.
(a) Beginning on the date that is 180 days after the Attorney General adopts regulations pursuant to Section 22615, and annually thereafter, an operator shall submit to an independent audit assessing the operator's compliance with this chapter. (b) Within 90 days of completing an independent audit pursuant to subdivision (a), the auditor shall submit an AI child safety audit report to the Attorney General for any audited companion chatbot. (c) (1) Notwithstanding any other law, except as provided in paragraph (2), an AI child safety audit report submitted pursuant to this section is confidential. (2) The Attorney General may disclose specific information from an AI child safety audit report to any of the following: (A) A government agency or a public prosecutor in the state as necessary for enforcement purposes. (B) A qualified researcher conducting a study on child safety, subject to confidentiality agreements and data protection requirements set by the Attorney General. (C) An independent child safety organization or advocacy group for the purpose of developing safety standards or educational resources, subject to appropriate confidentiality protections.
(a) A developer or a deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system. (b) The governance program required by this subdivision shall be appropriately designed with respect to all of the following: (1) The use, or intended use, of the high-risk automated decision system. (2) The size, complexity, and resources of the deployer or developer. (3) The nature, context, and scope of the activities of the deployer or developer in connection with the high-risk automated decision system. (4) The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.
(a) A covered deployer conducting business in this state shall have a duty to protect personal information held by the covered deployer as provided by this section. (b) A covered deployer whose high-risk artificial intelligence systems process personal information shall develop, implement, and maintain a comprehensive information security program that is written in one or more readily accessible parts and contains administrative, technical, and physical safeguards that are appropriate for all of the following: (1) The covered deployer's size, scope, and type of business. (2) The amount of resources available to the covered deployer. (3) The amount of data stored by the covered deployer. (4) The need for security and confidentiality of personal information stored by the covered deployer. (c) The comprehensive information security program required by subdivision (a) shall meet all of the following requirements: (1) The program shall incorporate safeguards that are consistent with the safeguards for the protection of personal information and information of a similar character under state or federal laws and regulations applicable to the covered deployer. (2) The program shall include the designation of one or more employees of the covered deployer to maintain the program. (3) The program shall require the identification and assessment of reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of any electronic, paper, or other record containing personal information, and the establishment of a process for evaluating and improving, as necessary, the effectiveness of the current safeguards for limiting those risks, including by all of the following: (A) Requiring ongoing employee and contractor education and training, including education and training for temporary employees and contractors of the covered deployer, on the proper use of security procedures and protocols and the importance of personal information security. (B) Mandating employee compliance with policies and procedures established under the program. (C) Providing a means for detecting and preventing security system failures. (5) The program shall provide disciplinary measures for violations of a policy or procedure established under the program. (6) The program shall include measures for preventing a terminated employee from accessing records containing personal information. (9) The program shall include regular monitoring to ensure that the program is operating in a manner reasonably calculated to prevent unauthorized access to or unauthorized use of personal information and, as necessary, upgrading information safeguards to limit the risk of unauthorized access to or unauthorized use of personal information. (10) The program shall require the regular review of the scope of the program's security measures that must occur subject to both of the following timeframes: (A) At least annually. (B) Whenever there is a material change in the covered deployer's business practices that may reasonably affect the security or integrity of records containing personal information. (11) The program shall require the documentation of responsive actions taken in connection with any incident involving a breach of security, including a mandatory postincident review of each event and the actions taken, if any, in response to that event to make changes in business practices relating to protection of personal information.
(4) The program shall include security policies for the covered deployer's employees relating to the storage, access, and transportation of records containing personal information outside of the covered deployer's physical business premises. (7) The program shall provide policies for the supervision of third-party service providers that include both of the following: (A) Taking reasonable steps to select and retain third-party service providers that are capable of maintaining appropriate security measures to protect personal information consistent with applicable law. (B) Requiring third-party service providers by contract to implement and maintain appropriate security measures for personal information. (8) The program shall provide reasonable restrictions on physical access to records containing personal information, including by requiring the records containing the data to be stored in a locked facility, storage area, or container.
(12) The program shall, to the extent feasible, include all of the following procedures and protocols with respect to computer system security requirements or procedures and protocols providing a higher degree of security, for the protection of personal information: (A) The use of secure user authentication protocols that include all of the following features: (i) The control of user login credentials and other identifiers. (ii) The use of a reasonably secure method of assigning and selecting passwords or using unique identifier technologies, which may include biometrics or token devices. (iii) The control of data security passwords to ensure that the passwords are kept in a location and a format that do not compromise the security of the data the passwords protect. (iv) The restriction of access to only active users and active user accounts. (v) The blocking of access to user credentials or identification after multiple unsuccessful attempts to gain access. (B) The use of secure access control measures that include both of the following: (i) The restriction of access to records and files containing personal information to only employees or contractors who need access to that personal information to perform the job duties of the employees or contractors. (ii) The assignment of a unique identification and a password to each employee or contractor with access to a computer containing personal information, that may not be a vendor-supplied default password, or the use of another protocol reasonably designed to maintain the integrity of the security of the access controls to personal information. (C) The encryption of both of the following: (i) Transmitted records and files containing personal information that will travel across public networks. (ii) Data containing personal information that is transmitted wirelessly. (D) The use of reasonable monitoring of systems for unauthorized use of or access to personal information. (E) The encryption of all personal information stored on laptop computers or other portable devices. (F) For files containing personal information on a system that is connected to the internet, the use of reasonably current firewall protection and operating system security patches that are reasonably designed to maintain the integrity of the personal information. (G) The use of both of the following: (i) A reasonably current version of system security agent software that shall include malware protection and reasonably current patches and virus definitions. (ii) A version of a system security agent software that is supportable with current patches and virus definitions, and is set to receive the most current security updates on a regular basis.
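The controls in paragraph (12) are concrete enough to track programmatically. The following is a minimal Python sketch of how a compliance team might record one system's posture against those controls; it is an illustration, not statutory language, and every field name and label is the editor's assumption.

```python
from dataclasses import dataclass

@dataclass
class SystemSecurityPosture:
    """One system's status against the paragraph (12) controls (illustrative fields only)."""
    lockout_after_failed_logins: bool      # (A)(v) block access after repeated failed attempts
    access_limited_to_need: bool           # (B)(i) records restricted to employees who need them
    unique_ids_no_default_passwords: bool  # (B)(ii) unique ID, no vendor-supplied default password
    encrypts_public_network_transit: bool  # (C)(i) encryption of records crossing public networks
    encrypts_wireless_transit: bool        # (C)(ii) encryption of wireless transmissions
    monitors_unauthorized_use: bool        # (D) reasonable monitoring for unauthorized use
    encrypts_portable_devices: bool        # (E) laptops and portable devices encrypted
    firewall_and_patches_current: bool     # (F) reasonably current firewall and OS patches
    security_agent_current: bool           # (G) security agent software with current definitions

CONTROL_LABELS = {
    "lockout_after_failed_logins": "(12)(A)(v) lockout after repeated failed login attempts",
    "access_limited_to_need": "(12)(B)(i) need-based access to personal information",
    "unique_ids_no_default_passwords": "(12)(B)(ii) unique IDs, no vendor-default passwords",
    "encrypts_public_network_transit": "(12)(C)(i) encryption over public networks",
    "encrypts_wireless_transit": "(12)(C)(ii) encryption of wireless transmissions",
    "monitors_unauthorized_use": "(12)(D) monitoring for unauthorized use or access",
    "encrypts_portable_devices": "(12)(E) encryption of portable devices",
    "firewall_and_patches_current": "(12)(F) current firewall and security patches",
    "security_agent_current": "(12)(G) current security agent software",
}

def open_findings(posture: SystemSecurityPosture) -> list[str]:
    """Return the paragraph (12) controls this system does not yet satisfy."""
    return [label for field_name, label in CONTROL_LABELS.items()
            if not getattr(posture, field_name)]
```

A posture object populated from a system review can then feed the program's documented findings and remediation actions.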
(f) (1) When a frontier developer publishes documents to comply with this section, the frontier developer may make redactions to those documents that are necessary to protect the frontier developer's trade secrets, the frontier developer's cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law. (2) If a frontier developer redacts information in a document pursuant to this subdivision, the frontier developer shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
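A developer complying with subdivision (f) must keep two artifacts in sync: the published, redacted document (with a description and justification for each redaction) and the unredacted original retained for five years. Below is a hypothetical Python sketch of such a record; the schema and field names are the editor's assumptions, not anything the provision prescribes.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RedactionRecord:
    """Links a published redaction to the retained unredacted source (illustrative schema)."""
    document_id: str
    character_of_redaction: str   # published description of what was removed
    justification: str            # e.g., trade secrets, cybersecurity, public safety
    unredacted_location: str      # internal pointer to the full, unredacted text
    published_on: date

    def retain_unredacted_until(self) -> date:
        # five-year retention of the unredacted information; 365-day years used for illustration
        return self.published_on + timedelta(days=5 * 365)
```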
A large frontier developer shall write, implement, comply with, and clearly and conspicuously publish on its internet website a frontier AI framework that applies to the large frontier developer's frontier models and describes how the large frontier developer approaches all of the following: (1) Incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework. (2) Defining and assessing thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds. (3) Applying mitigations to address the potential for catastrophic risks based on the results of assessments undertaken pursuant to paragraph (2). (4) Reviewing assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally. (5) Using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks. (6) Revisiting and updating the frontier AI framework, including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures pursuant to subdivision (c). (7) Cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties. (8) Identifying and responding to critical safety incidents. (9) Instituting internal governance practices to ensure implementation of these processes. (10) Assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
A large frontier developer shall review and, as appropriate, update its frontier AI framework at least once per year.
If a large frontier developer makes a material modification to its frontier AI framework, the large frontier developer shall clearly and conspicuously publish the modified frontier AI framework and a justification for that modification within 30 days.
(B) A large frontier developer shall not make a materially false or misleading statement about its implementation of, or compliance with, its frontier AI framework... (2) This subdivision does not apply to a statement that was made in good faith and was reasonable under the circumstances.
A frontier developer shall not make, adopt, enforce, or enter into a rule, regulation, policy, or contract that prevents a covered employee from disclosing, or retaliates against a covered employee for disclosing, information to the Attorney General, a federal authority, a person with authority over the covered employee, or another covered employee who has authority to investigate, discover, or correct the reported issue, if the covered employee has reasonable cause to believe that the information discloses either of the following: (1) The frontier developer's activities pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk. (2) The frontier developer has violated Chapter 25.1 (commencing with Section 22757.10) of Division 8 of the Business and Professions Code.
A frontier developer shall not enter into a contract that prevents a covered employee from making a disclosure protected under Section 1102.5.
A frontier developer shall provide a clear notice to all covered employees of their rights and responsibilities under this section, including by doing either of the following: (1) At all times posting and displaying within any workplace maintained by the frontier developer a notice to all covered employees of their rights under this section, ensuring that any new covered employee receives equivalent notice, and ensuring that any covered employee who works remotely periodically receives an equivalent notice. (2) At least once each year, providing written notice to each covered employee of the covered employee's rights under this section and ensuring that the notice is received and acknowledged by all of those covered employees.
A large frontier developer shall provide a reasonable internal process through which a covered employee may anonymously disclose information to the large frontier developer if the covered employee believes in good faith that the information indicates that the large frontier developer's activities present a specific and substantial danger to the public health or safety resulting from a catastrophic risk or that the large frontier developer violated Chapter 25.1 (commencing with Section 22757.10) of Division 8 of the Business and Professions Code, including a monthly update to the person who made the disclosure regarding the status of the large frontier developer's investigation of the disclosure and the actions taken by the large frontier developer in response to the disclosure. (2)(A) Except as provided in subparagraph (B), the disclosures and responses of the process required by this subdivision shall be shared with officers and directors of the large frontier developer at least once each quarter. (B) If a covered employee has alleged wrongdoing by an officer or director of the large frontier developer in a disclosure or response, subparagraph (A) shall not apply with respect to that officer or director.
(e) THE ARTIFICIAL INTELLIGENCE SYSTEM PRODUCES AND RETAINS DOCUMENTATION, AUDIT LOGS, AND MODEL-GOVERNANCE RECORDS IN ORDER TO DEMONSTRATE COMPLIANCE WITH THIS SECTION AND SECTION 10-3-1104.9;
(2) (a) On and after June 30, 2026, and except as provided in subsection (6) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection (2) must be reasonable considering:
(b) Except as provided in Code Section 10-16-6, a deployer of an automated decision system shall implement a risk management policy and program to govern the deployer's deployment of the automated decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of an automated decision system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection shall take into consideration: (1) Either: (A) The guidance and standards set forth in the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology of the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (B) Any risk management framework for artificial intelligence systems that the Attorney General, in the Attorney General's discretion, may designate; (2) The size and complexity of the deployer; (3) The nature and scope of the automated decision systems deployed by the deployer, including the intended uses of the automated decision systems; and (4) The sensitivity and volume of data processed in connection with the automated decision systems deployed by the deployer. (c) A risk management policy and program implemented pursuant to this Code section may cover multiple automated decision systems deployed by the deployer.
Each deployer shall establish and adhere to: (1) Written standards, policies, procedures, and protocols for the acquisition of, use of, or reliance on automated decision systems developed by third-party developers, including reasonable contractual controls ensuring that the developer statements and summaries described in subsection (b) of Code Section 10-16-2 include all information necessary for the deployer to fulfill its obligations under this Code section; (2) Procedures for reporting any incorrect information or evidence of algorithmic discrimination to a developer for further investigation and mitigation, as necessary; and (3) Procedures to remediate and eliminate incorrect information from its automated decision systems that the deployer has identified or that has been reported to a developer.
(4) Maintain: (A) An updated inventory of the artificial intelligence systems; (B) Documentation on the system design, intended use, and training data of the artificial intelligence systems; (C) Record of the monitoring, performance evaluations, and oversight activities; and (D) Documentation of findings and actions taken to address any deficiencies identified through the monitoring or performance evaluations.
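Paragraph (4) is, in effect, a schema for an AI-system register. The Python sketch below shows one way a covered organization might represent a single inventory entry; all field names are illustrative assumptions rather than terms drawn from the provision.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemInventoryEntry:
    """One entry in the inventory required by paragraph (4) (illustrative fields)."""
    system_name: str                   # (A) inventory of the artificial intelligence systems
    intended_use: str                  # (B) intended use
    design_documentation: str          # (B) system design
    training_data_description: str     # (B) training data
    monitoring_records: list[str] = field(default_factory=list)       # (C) monitoring and oversight
    performance_evaluations: list[str] = field(default_factory=list)  # (C) performance evaluations
    deficiency_findings: list[str] = field(default_factory=list)      # (D) findings
    remediation_actions: list[str] = field(default_factory=list)      # (D) actions taken
```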
5. An employer shall maintain an updated list of all automated decision systems currently in use by the employer to facilitate implementation of this section.
1. a. A private entity in possession of biometric data shall develop a written policy to establish a schedule for how long the private entity will retain biometric data before the private entity destroys the biometric data. b. A written policy shall be available to the public. c. A private entity shall not retain biometric data for more than three years after the subject of the biometric data last interacts with the private entity or until the purposes for which the biometric data was collected have been accomplished, whichever is longer.
(b) A health insurance issuer shall ensure that its health insurance coverage is administered in conformity with this Act. The health insurance issuer's AI systems program shall include policies and procedures to ensure such conformity by all employees, directors, trustees, agents, representatives, and persons directly or indirectly contracted to administer the health insurance coverage. The health insurance issuer shall be responsible for any noncompliance under this Act with respect to its health insurance coverage. Nothing in this Section relieves any other person from liability for failure to comply with the Department's investigations or market conduct actions related to a health insurance issuer's compliance with this Act.
To address the concerns detailed in the findings in Section 5 of this Act and to ensure that negative impacts of AI system use are prevented, the Department of Innovation and Technology shall adopt rules as may be necessary to ensure that businesses using AI systems are compliant with the 5 principles of AI governance as follows: (1) Safety: Ensuring systems operate without causing harm to individuals. (2) Transparency: Providing clear and understandable explanations of how systems work and make decisions. (3) Accountability: Identifying and holding individuals or companies responsible for the system's performance and outcomes. (4) Fairness: Preventing and mitigating bias to ensure equitable treatment for all individuals. (5) Contestability: Allowing individuals to challenge and seek redress for decisions made by the system.
(a) A deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination associated with the use or intended use of an automated decision tool. The safeguards required by this subsection shall be appropriate to all of the following: (1) the use or intended use of the automated decision tool; (2) the deployer's role as a deployer; (3) the size, complexity, and resources of the deployer; (4) the nature, context, and scope of the activities of the deployer in connection with the automated decision tool; and (5) the technical feasibility and cost of available tools, assessments, and other means used by a deployer to map, measure, manage, and govern the risks associated with an automated decision tool. (b) The governance program required by this Section shall be designed to do all of the following: (1) identify and implement safeguards to address reasonably foreseeable risks of algorithmic discrimination resulting from the use or intended use of an automated decision tool; (2) if established by a deployer, provide for the performance of impact assessments as required by Section 10; (3) conduct an annual and comprehensive review of policies, practices, and procedures to ensure compliance with this Act; (4) maintain for 2 years after completion the results of an impact assessment; and (5) evaluate and make reasonable adjustments to administrative and technical safeguards in light of material changes in technology, the risks associated with the automated decision tool, the state of technical standards, and changes in business arrangements or operations of the deployer. (c) A deployer shall designate at least one employee to be responsible for overseeing and maintaining the governance program and compliance with this Act. An employee designated under this subsection shall have the authority to assert to the employee's employer a good faith belief that the design, production, or use of an automated decision tool fails to comply with the requirements of this Act. An employer of an employee designated under this subsection shall conduct a prompt and complete assessment of any compliance issue raised by that employee. (d) This Section does not apply to a deployer with fewer than 25 employees unless, as of the end of the prior calendar year, the deployer deployed an automated decision tool that impacted more than 999 people per year.
(a) At least once every 2 years, an operator shall obtain an independent, third-party audit to assess the operator's compliance with this Act. The operator shall make publicly available on its website a high-level summary of the audit's findings, excluding confidential or proprietary information.
(a) Risk Management Policy: Deployers of high-risk AI systems must implement and maintain a risk management program that: (1) Identifies and mitigates known or foreseeable risks of algorithmic discrimination; (2) Aligns with industry standards, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
(b) (1) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection (b) must be reasonable considering: (i) (A) the guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (B) any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer. (2) a risk management policy and program implemented pursuant to subsection (b)(1) of this section may cover multiple high-risk artificial intelligence systems deployed by the deployer.
(c) An employer shall establish, maintain, and preserve for three years contemporaneous, true, and accurate records of data collected via an electronic monitoring tool to ensure compliance with employee or commissioner requests for data. The employer shall destroy any employee information collected via an electronic monitoring tool no later than thirty-seven months after collection unless the employee has provided written and informed consent to the retention of their data by the employer. An employer shall establish, implement and maintain reasonable administrative, technical and physical data security practices to protect the confidentiality, integrity and accessibility of employee data appropriate to the volume and nature of the employee data at issue. An employee shall have the right to request corrections to erroneous employee data.
(c) An employer or its vendor shall retain all documentation pertaining to the design, development, use, and data of an automated employment decision tool that may be necessary to conduct an impact assessment. To the extent held by a vendor, the employer shall be granted a license to access this documentation and share this documentation with a labor organization to the extent required by federal or state law, or to the extent required by a court or agency in connection with employment or labor litigation. This includes but is not limited to the source of the data used to develop the tool, the technical specifications of the tool, individuals involved in the development of the tool, and historical use data for the tool. Such documentation must include a historical record of versions of the tool, such that, in the event of litigation disputing an employment decision, an employer shall be able to attest to the nature and specifications of the tool as it was used at the time of that employment decision. Such documentation shall be stored in accordance with such record-keeping, data retention, and security requirements as the commissioner may specify, and in such a manner as to be legible and accessible to the party conducting an impact assessment. (d) If an initial or subsequent impact assessment requires the collection of employee data to assess a tool's disparate impact on employees, such data shall be collected, processed, stored, retained, and disposed of in such a manner as to protect the privacy of employees, and shall comply with any data retention and security requirements specified by the commissioner. Employee data provided to auditors for the purpose of an impact assessment shall not be shared with the employer, nor shall it be shared with any person, business entity, or other organization unless strictly necessary for the completion of the impact assessment.
(a) A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the person from whom biometric information is to be collected or was collected, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 1 year of the individual's last interaction with the private entity, whichever occurs first. Absent a valid order, warrant, or subpoena issued by a court of competent jurisdiction or a local or federal governmental agency, a private entity in possession of biometric identifiers or biometric information must comply with its established retention schedule and destruction guidelines.
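The "whichever occurs first" rule reduces to the earlier of two dates. A minimal sketch follows, assuming the entity tracks the individual's last interaction and the date (if any) on which the collection purpose was satisfied; both parameter names and the 365-day year are the editor's assumptions.

```python
from datetime import date, timedelta

def biometric_destruction_deadline(last_interaction: date,
                                   purpose_satisfied_on: date | None) -> date:
    """Earlier of: the date the initial purpose was satisfied, or one year after the
    individual's last interaction with the private entity (365-day year for illustration)."""
    one_year_after_interaction = last_interaction + timedelta(days=365)
    if purpose_satisfied_on is None:
        return one_year_after_interaction
    return min(purpose_satisfied_on, one_year_after_interaction)
```

For example, with a last interaction on March 1, 2025 and no satisfied purpose on record, the sketch returns March 1, 2026 as the destruction deadline.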
(d) Record and retain for 5 years any specific tests used and results obtained as a part of an assessment of critical risk with sufficient detail for qualified third parties to replicate the testing. (3) If a large developer publishes a document in accordance with the requirements of this act, the large developer shall publish the information on a conspicuous page on the large developer's website. The large developer may redact the document as reasonably necessary to protect the large developer's trade secrets, public safety, or national security, or to comply with applicable law. An auditor required to perform an audit and produce a report under section 9 may redact information from the report using the same procedure described in this subsection before the publication of that report under section 9(3). (4) If a large developer or auditor makes a redaction under subsection (3), the large developer or auditor shall do both of the following: (a) Retain an unredacted version of the document for not less than 5 years and provide the attorney general with the ability to inspect the unredacted document on request. (b) Describe the character and justification of the redactions in the published version of the document.
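Subdivision (d) above requires enough detail for qualified third parties to replicate the testing. One hypothetical way to capture that obligation in code is sketched below; every field name, and the idea of a five-year "retain until" helper, is an assumption about what sufficient detail might include.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CriticalRiskTestRecord:
    """A retained record of one test run from a critical-risk assessment (illustrative)."""
    test_name: str
    model_version: str
    test_procedure: str        # steps documented in enough detail for independent replication
    configuration: dict        # prompts, parameters, and evaluation-harness settings used
    results_summary: str
    raw_results_location: str  # pointer to the full outputs obtained
    run_on: date

    def retain_until(self) -> date:
        # five-year retention window; 365-day years used for illustration
        return self.run_on + timedelta(days=5 * 365)
```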
(1) Beginning on January 1, 2026, not less than once per year, a large developer shall retain a reputable third-party auditor to produce a report that assesses all of the following: (a) Whether the large developer has complied with the large developer's safety and security protocol and any instances of noncompliance. (b) Any instance where the large developer's safety and security protocol was not stated clearly enough to determine if the large developer has complied with the safety and security protocol. (c) Any instance in which the auditor believes the large developer violated section 7(2), (3), or (4). (2) A large developer shall grant the auditor access to all materials produced to comply with this act and any other materials reasonably necessary to perform the assessment under subsection (1). (3) Not more than 90 days after the completion of the auditor's report under subsection (1), a large developer shall conspicuously publish that report. (4) In conducting an audit under this section, an auditor shall employ or contract 1 or more individuals with expertise in corporate compliance and 1 or more individuals with technical expertise in the safety of foundation models.
Sec. 7. (1) An employer that collects a covered individual's data shall retain the data for not more than 3 years after the date on which the purpose for using the electronic monitoring tool or automated decisions tool is achieved, unless otherwise specified by a collective bargaining agreement. If the employer does not use any specific data of a covered individual, the employer must delete that data immediately. (2) An employer shall not sell or license a covered individual's data, including, but not limited to, data that is deidentified or aggregated. (3) An employer shall not share data collected under section 4 or 5 with this state or a local unit of government unless otherwise necessary to do any of the following: (a) Provide information to the department. (b) Comply with the requirements of federal, state, or local law. (c) Comply with a court-issued subpoena, warrant, or order.
(4) An employer shall retain all documentation pertaining to the design, development, use, and data of an electronic monitoring tool or automated decisions tool that may be necessary to conduct an impact assessment. The documentation includes, but is not limited to, the source of the data used to develop the tool, the technical specifications of the tool, individuals involved in the development of the tool, historical use data for the tool, and a historical record of the versions of the tool the employer uses. (5) A service provider that contracts with an employer to provide electronic monitoring or automated decisions shall allow the employer access to the documentation described in subsection (4). (6) An employer shall share the documentation described in subsection (4) with a labor organization as required under law or as required by a court or agency in connection with any employment or labor litigation to which the employer is a party. (7) The documentation described in subsection (4) must be stored in a manner prescribed by the director. The director shall prescribe the manner so that the documentation is legible and accessible to the party that conducts an impact assessment of the tool.
Subdivision 1. Data records. (a) Employers must maintain records of worker data collected, used, or produced by an automated decision system and any input or output data used or produced by the automated decision system or used as corroborating evidence by a human reviewer for 36 months after the data's most recent collection, production, or use to ensure compliance with requests for data from workers or the commissioner of labor and industry. (b) Employers must destroy any worker data collected, used, or produced by an automated decision system and any input or output data used or produced by the automated decision system or used as corroborating evidence by a human reviewer no later than 37 months after its most recent collection, production, or use, unless the worker has provided written and informed consent to the retention of the worker's data by the employer. (c) Employers must protect the confidentiality, integrity, and accessibility of worker data using data security practices consistent with data and cyber privacy laws and appropriate to the volume and nature of the worker data collected.
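The 36-month retention floor and 37-month destruction ceiling are mechanical enough to automate. A minimal sketch follows, assuming the employer tracks each record's most recent collection, production, or use and whether written, informed consent to longer retention is on file; the coarse month arithmetic and function names are illustrative only.

```python
from datetime import date

RETENTION_MONTHS = 36    # keep available for worker or commissioner requests
DESTRUCTION_MONTHS = 37  # destroy by this point absent written, informed consent

def months_between(earlier: date, later: date) -> int:
    """Whole calendar months elapsed (coarse; ignores day-of-month)."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def still_within_retention(last_use: date, today: date) -> bool:
    """True while the record sits inside the 36-month retention window."""
    return months_between(last_use, today) < RETENTION_MONTHS

def destruction_due(last_use: date, consent_on_file: bool, today: date) -> bool:
    """True once 37 months have passed since the most recent collection, production,
    or use, unless the worker has consented in writing to longer retention."""
    return not consent_on_file and months_between(last_use, today) >= DESTRUCTION_MONTHS
```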
(a) A developer must (1) conduct an annual review of the safety and security protocol required under this section to account for changes to the capabilities of the artificial intelligence model and industry best practices; and (2) if necessary, modify the safety and security protocol. (b) If a material modification is made to the safety and security protocol, the developer must publish the safety and security protocol in the same manner required under subdivision 1, clause (3).
A developer must not knowingly make false or materially misleading statements or omissions in or regarding documents produced under this section.
1. Any private entity in possession of biometric identifiers or biometric information shall develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within one year of the individual's last interaction with the private entity, whichever occurs first. Absent a valid warrant or subpoena issued by a court of competent jurisdiction, a private entity in possession of biometric identifiers or biometric information shall comply with its established retention schedule and destruction guidelines.
(e) A licensee shall conduct regular inspections and perform an annual third-party audit. Results of all inspections and audits must be made available to the Department. (f) A licensee shall implement continuous monitoring systems for safety and risk indicators and submit quarterly performance reports including incident reports.
(6)(a) When a large frontier developer or large chatbot provider publishes documents to comply with this section, the large frontier developer or large chatbot provider may make redactions to those documents that are necessary to protect the large frontier developer's trade secrets, the large frontier developer's or large chatbot provider's cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law. (b) If a large frontier developer or large chatbot provider redacts information in a document pursuant to subdivision (6)(a) of this section, the large frontier developer or large chatbot provider shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
(2)(a) Except as otherwise provided in subsection (6) of this section, on and after February 1, 2026, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. High-risk artificial intelligence systems that are in conformity with the guidance and standards set forth in the following as of January 1, 2025, shall be presumed to be in conformity with this section: (i) The Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology; or (ii) The standard ISO/IEC 42001 of the International Organization for Standardization. (b) Any risk management policy and program implemented pursuant to subdivision (a) of this subsection may cover multiple high-risk artificial intelligence systems deployed by the deployer.
(a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer of a high-risk artificial intelligence decision system shall implement and maintain a risk management policy and program to govern such deployer's deployment of the high-risk artificial intelligence decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer shall use to identify, document, and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy shall be the product of an iterative process; the risk management program shall be an iterative process; and both the risk management policy and program shall be planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence decision system. Each risk management policy and program implemented and maintained pursuant to this subdivision shall be reasonable, considering: (i) the guidance and standards set forth in the latest version of: (A) the "Artificial Intelligence Risk Management Framework" published by the national institute of standards and technology; (B) ISO/IEC 42001 of the international organization for standardization; or (C) a nationally or internationally recognized risk management framework for artificial intelligence decision systems, other than the guidance and standards specified in clauses (A) and (B) of this subparagraph, that imposes requirements that are substantially equivalent to, and at least as stringent as, the requirements established pursuant to this section for risk management policies and programs; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence decision systems deployed by the deployer, including, but not limited to, the intended uses of such high-risk artificial intelligence decision systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence decision systems deployed by the deployer. (b) A risk management policy and program implemented and maintained pursuant to paragraph (a) of this subdivision may cover multiple high-risk artificial intelligence decision systems deployed by the deployer.
(a) create and maintain technical documentation for the general-purpose artificial intelligence model, which shall: (i) include: (A) the training and testing processes for such general-purpose artificial intelligence model; and (B) the results of an evaluation of such general-purpose artificial intelligence model performed to determine whether such general-purpose artificial intelligence model is in compliance with the provisions of this article; (ii) include, as appropriate, considering the size and risk profile of such general-purpose artificial intelligence model, at least: (A) the tasks such general-purpose artificial intelligence model is intended to perform; (B) the type and nature of artificial intelligence decision systems in which such general-purpose artificial intelligence model is intended to be integrated; (C) acceptable use policies for such general-purpose artificial intelligence model; (D) the date such general-purpose artificial intelligence model is released; (E) the methods by which such general-purpose artificial intelligence model is distributed; and (F) the modality and format of inputs and outputs for such general-purpose artificial intelligence model; and (iii) be reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such technical documentation;
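The clause-by-clause contents of this technical documentation map cleanly onto a structured record. The Python sketch below is one possible representation; the field names and the annual-review helper are the editor's assumptions, not terms of the provision.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelTechnicalDocumentation:
    """Annually reviewed documentation for a general-purpose AI model (illustrative fields)."""
    training_and_testing_processes: str   # (i)(A)
    compliance_evaluation_results: str    # (i)(B)
    intended_tasks: list[str]             # (ii)(A)
    target_decision_systems: list[str]    # (ii)(B) systems the model may be integrated into
    acceptable_use_policies: str          # (ii)(C)
    release_date: date                    # (ii)(D)
    distribution_methods: list[str]       # (ii)(E)
    input_output_modalities: str          # (ii)(F)
    last_reviewed_on: date                # (iii) reviewed and revised at least annually

    def review_overdue(self, today: date) -> bool:
        # annual review check, using a 365-day year for illustration
        return (today - self.last_reviewed_on).days > 365
```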
(d) A developer that is exempt pursuant to subparagraph (ii) of paragraph (a) of this subdivision shall establish and maintain an artificial intelligence risk management framework, which shall: (i) be the product of an iterative process and ongoing efforts; and (ii) include, at a minimum: (A) an internal governance function; (B) a map function that shall establish the context to frame risks; (C) a risk management function; and (D) a function to measure identified risks by assessing, analyzing and tracking such risks.
§ 516. Ethics and risk management board and reports. 1. Every operator of a licensed high-risk advanced artificial intelligence system or systems shall establish an ethics and risk management board composed of no less than five individuals who shall have the responsibility to assess the ethical implications of all possible use cases of the system, whether such use cases are intended or unintended, and whether likely or unlikely to be used, and the current operational outcomes of the system. Such operator, other than an operator who is a natural person, operating more than one high-risk advanced artificial intelligence system with a supplemental license shall not be required to have more than one ethics and risk management board for each system. 2. No member of an ethics and risk management board shall be a member, officer, or director within the operator's entity. No member shall be required to be employed by the operator. 3. Such board shall adopt rules governing its decision-making processes, duties and responsibilities. Such rules shall not conflict with the provisions of this article. 4. Annually, the ethics and risk management board of each operator shall submit to the secretary a comprehensive report for each licensed high-risk advanced artificial intelligence system which consists of the following: (a) All possible use cases, whether intended or unintended, whether likely or unlikely. (b) A thorough risk assessment for each use case, considering and evaluating the potential for harm, irrespective of the probability of such risk materializing. This shall include, but not be limited to, the system's potential impact on privacy, security, fairness, economic implications, societal well-being, and safety of persons and the environment. (c) A detailed evaluation of known use cases of the system by users, exploring whether certain applications ought to be constrained or banned due to ethical considerations. This shall include an assessment of the operator's capacity to impose such constraints on use cases. (d) A mitigation plan for each identified risk, including preemptive measures, monitoring processes, and responsive actions. This shall also include a communication strategy to inform users and stakeholders about potential risks and steps taken to mitigate them. (e) A comprehensive review of any incidents or failures of the system in the past year, detailing the circumstances, impacts, measures taken to address the issue, and modifications made to prevent such incidents in the future. (f) Any existing attempts to educate users and, based on the existing use of the system by users, a detailed plan on how the operator intends to inform and instruct users on the safe and ethical use of the system, considering varying levels of digital literacy among users. (g) A disclosure of any conflicts of interest within the ethics board, which could potentially influence the board's decisions and recommendations. This shall include measures to manage and resolve such conflicts. (h) An update on the measures taken by the operator to ensure the system's adherence to existing laws, regulations, and ethical guidelines related to artificial intelligence. 5. 
In addition to any applicable civil penalties pursuant to section five hundred eight of this article, a member of an ethics and risk management board who makes a false statement, fails to disclose conflicts of interest or misrepresents the risks or severity of the risks posed by a system in the performance of their duties as a member of such board, shall be guilty of a misdemeanor and, upon conviction, shall be fined not more than five hundred dollars or imprisoned for not more than six months or both, in the discretion of the court.
§ 524. Logging. Every time a licensee's system operates it shall automatically generate a log. Standards related to the specific types of events that are required to be logged, the format in which logs must be kept, the individuals or entities permitted to access logs and the conditions governing such access, the encryption and cybersecurity protocols to be applied to logs, the procedures for both the preservation and disposal of logs, and any other actions pertinent to log management shall conform to the standards set by the secretary. Such logs shall be preserved for a period of ten years from the date they are generated and shall be subject to inspection under section five hundred twenty-six of this article.
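Because the secretary, not the section itself, sets the log schema, any concrete format is an assumption. The sketch below simply shows one JSON-lines shape that records an operation of the system and stamps each entry with its ten-year preservation horizon; the field names are illustrative.

```python
import json
from datetime import datetime, timedelta, timezone

LOG_RETENTION = timedelta(days=10 * 365)  # ten years, using 365-day years for illustration

def make_log_entry(system_id: str, event_type: str, detail: dict) -> str:
    """Emit one JSON log line for one operation of the licensee's system (hypothetical schema)."""
    created = datetime.now(timezone.utc)
    entry = {
        "system_id": system_id,
        "event_type": event_type,
        "created_at": created.isoformat(),
        "preserve_until": (created + LOG_RETENTION).isoformat(),
        "detail": detail,  # must be JSON-serializable in this sketch
    }
    return json.dumps(entry, sort_keys=True)
```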
§ 527. Books, records, source code, and logs to be kept. 1. Every operator shall maintain such books, records, source code, and logs as the secretary shall require provided however that every operator shall, at least, maintain a copy of all logs generated from the system as well as a backup of every version of the system which shall be stored in a safe manner as prescribed by the secretary. 2. By a date to be set by the secretary, each operator shall annually file a report with the secretary giving such information as the secretary may require concerning the business and operations during the preceding calendar year of the operator within the state under the authority of this article. Such report shall be subscribed and affirmed as true by the operator under the penalties of perjury and be in the form prescribed by the secretary. In addition to such annual reports, the secretary may require of operators such additional regular or special reports as the secretary may deem necessary to the proper supervision of operators under this article. Such additional reports shall be in the form prescribed by the secretary and shall be subscribed and affirmed as true under the penalties of perjury.
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
(a) Beginning on the effective date of this article, or ninety days after a developer first qualifies as a large developer, whichever is later, a large developer shall annually retain a third party to perform an independent audit of compliance with the requirements of this section. Such third party shall conduct audits consistent with best practices. (b) The third party shall be granted access to unredacted materials as necessary to comply with the third party's obligations under this subdivision. (c) The third party shall produce a report including all of the following: (i) A detailed assessment of the large developer's steps to comply with the requirements of this section; (ii) If applicable, any identified instances of noncompliance with the requirements of this section, and any recommendations for how the developer can improve its policies and processes for ensuring compliance with the requirements of this section; (iii) A detailed assessment of the large developer's internal controls, including its designation and empowerment of senior personnel responsible for ensuring compliance by the large developer, its employees, and its contractors; and (iv) The signature of the lead auditor certifying the results of the audit. (d) The large developer shall retain an unredacted copy of the report for as long as a frontier model is deployed plus five years. (e) (i) The large developer shall conspicuously publish a copy of the third party's report with appropriate redactions and transmit a copy of such redacted report to the division of homeland security and emergency services. (ii) The large developer shall grant the division of homeland security and emergency services or the attorney general access to the third party's report, with redactions only to the extent required by federal law, upon request.
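The required contents of the third-party report in paragraph (c), together with the retention rule in paragraph (d), can be represented as a single record. A hypothetical sketch follows; the schema, field names, and retention arithmetic are the editor's assumptions.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ThirdPartyAuditReport:
    """Contents of the independent compliance audit report (illustrative schema)."""
    compliance_assessment: str           # (c)(i) steps taken to comply with this section
    internal_controls_assessment: str    # (c)(iii) controls and responsible senior personnel
    lead_auditor_signature: str          # (c)(iv) certification of the results of the audit
    issued_on: date
    noncompliance_findings: list[str] = field(default_factory=list)  # (c)(ii) instances and recommendations

def retain_unredacted_until(report: ThirdPartyAuditReport, model_retired_on: date) -> date:
    """Paragraph (d): keep the unredacted report for as long as the frontier model is deployed
    plus five years (365-day years used for illustration)."""
    return model_retired_on + timedelta(days=5 * 365)
```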
Every person, firm, partnership, association or corporation doing business or offering products to consumers in New York state shall develop a responsible capability scaling policy for the use and development of artificial intelligence by such entity.
1. Developers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A developer of a high-risk AI system shall complete at least: (i) a first audit within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is the first deployer to deploy the high-risk AI system, after initial deployment; and (ii) one audit every one year following the submission of the first audit. (b) A developer audit under this section shall include: (i) an evaluation and determination of whether the developer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; and (ii) an evaluation of the developer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 2. Deployers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A deployer of a high-risk AI system shall complete at least: (i) a first audit within six months after initial deployment; (ii) a second audit within one year following the submission of the first audit; and (iii) one audit every two years following the submission of the second audit. (b) A deployer audit under this section shall include: (i) an evaluation and determination of whether the deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; (ii) an evaluation of system accuracy and reliability with respect to such high-risk AI system's deployer-intended and actual use cases; and (iii) an evaluation of the deployer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 3. A deployer or developer may hire more than one auditor to fulfill the requirements of this section. 5. The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer under section eighty-eight of this article. 6. An audit conducted under this section may be completed in part, but shall not be completed entirely, with the assistance of an AI system. (a) Acceptable auditor uses of an AI system include, but are not limited to: (i) use of an audited high-risk AI system in a controlled environment without impacts on end users for system testing purposes; or (ii) detecting patterns in the behavior of an audited AI system. (b) An auditor shall not: (i) use a different high-risk AI system that is not the subject of an audit to complete an audit; or (ii) use an AI system to draft an audit under this section without meaningful human review and oversight. 7. (a) An auditor shall be an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, or association.
(b) For the purposes of this article, no auditor may be commissioned by a developer or deployer of a high-risk AI system if such entity: (i) has already been commissioned to provide any auditing or non-auditing service, including but not limited to financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past twelve months; or (ii) is, will be, or plans to be engaged in the business of developing or deploying an AI system that can compete commercially with such developer's or deployer's high-risk AI system in the five years following an audit. (c) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result. 8. The attorney general may promulgate further rules to ensure (a) the independence of auditors under this section, and (b) that teams conducting audits incorporate feedback from communities that may foreseeably be the subject of algorithmic discrimination with respect to the AI system being audited. 9. If a developer or deployer has an audit completed for the purpose of complying with another applicable federal, state, or local law or regulation, and the audit otherwise satisfies all other requirements of this section, such audit shall be deemed to satisfy the requirements of this section.
1. Each developer or deployer of high-risk AI systems shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of such high-risk AI system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under subdivision one of section eighty-six of this article. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk AI system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this section shall be reasonable considering: (a) The guidance and standards set forth in: (i) version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States department of commerce, or (ii) another substantially equivalent framework selected at the discretion of the attorney general, if such framework was designed to manage risks associated with AI systems, is nationally or internationally recognized and consensus-driven, and is at least as stringent as version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology; (b) The size and complexity of the developer or deployer; (c) The nature, scope, and intended uses of the high-risk AI system developed or deployed; and (d) The sensitivity and volume of data processed in connection with the high-risk AI system. 2. A risk management policy and program implemented pursuant to subdivision one of this section may cover multiple high-risk AI systems developed by the same developer or deployed by the same deployer if sufficient. 3. The attorney general may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subdivision one of this section in a form and manner prescribed by the attorney general. The attorney general may evaluate the risk management policy and program to ensure compliance with this section.
1. A developer shall do the following: (a) upon the reasonable request of the deployer, make available to the deployer information necessary to demonstrate the compliance of the deployer with the requirements of this article, including: (i) making available a report of the pre-deployment evaluation described in section one hundred three of this article or the annual review of assessments conducted by the developer under section one hundred four of this article; and (ii) providing information necessary to enable the deployer to conduct and document a pre-deployment evaluation under section one hundred three or an impact assessment described in section one hundred four of this article; and (b) either: (i) allow and cooperate with reasonable assessments conducted by the deployer or the deployer's designated independent auditor; or (ii) arrange for an independent auditor to conduct an assessment of the developer's policies and practices in support of the obligations under this article using an appropriate and accepted control standard or framework and assessment procedure for such assessments and provide a report of such assessment to the deployer upon request. 2. A developer may offer or license a covered algorithm to a deployer pursuant to a written contract between the developer and deployer, provided that the contract: (a) clearly sets forth the data processing procedures of the developer with respect to any collection, processing, or transfer of data performed on behalf of the deployer; (b) clearly sets forth: (i) instructions for collecting, processing, transferring, or disposing of data by the developer or deployer in the context of the use of the covered algorithm; (ii) instructions for deploying the covered algorithm as intended; (iii) the nature and purpose of any collection, processing, or transferring of data; (iv) the type of data subject to such collection, processing, or transferring; (v) the duration of such processing of data; and (vi) the rights and obligations of both parties, including a method by which the developer shall notify the deployer of material changes to its covered algorithm; (c) shall not relieve a developer or deployer of any requirement or liability imposed on such developer or deployer under this article; (d) prohibits both the developer and deployer from combining data received from or collected on behalf of the other party with data the developer or deployer received from or collected on behalf of another party; and (e) shall not prohibit a developer or deployer from raising concerns to any relevant enforcement agency with respect to the other party. 3. Each developer shall retain for a period of ten years a copy of each contract entered into with a deployer to which it provides requested products or services.
Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, for as long as a frontier model is deployed plus five years.
Record, as and when reasonably possible, and retain, for as long as the frontier model is deployed plus five years, information on the specific tests and test results used in any assessment of the frontier model required by this section or the developer's safety and security protocol, in sufficient detail for third parties to replicate the testing procedure.
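A minimal Python sketch of how a developer might structure the test records and retention horizon described in the two provisions above; every field name and the add_years helper are illustrative assumptions, not terms drawn from the statute.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

def add_years(d: date, years: int) -> date:
    # Fall back to Feb 28 when the anniversary of Feb 29 does not exist.
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, day=28)

@dataclass
class TestRecord:
    test_name: str
    model_version: str
    date_run: date
    procedure: str         # detail sufficient for a third party to replicate the test
    inputs_reference: str  # pointer to the exact prompts, datasets, or configurations used
    results_summary: str

@dataclass
class FrontierModelRecords:
    model_name: str
    deployment_ended: Optional[date] = None   # None while the model remains deployed
    tests: list[TestRecord] = field(default_factory=list)

    def retention_expires(self) -> Optional[date]:
        # Records are kept while the model is deployed and for five years afterward.
        if self.deployment_ended is None:
            return None
        return add_years(self.deployment_ended, 5)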
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any material modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
"Safety and security protocol" means documented technical and organizational protocols that: ... (e) Designate senior personnel to be responsible for ensuring compliance.
A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced pursuant to this section.
1. Developers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A developer of a high-risk AI system shall complete at least: (i) a first audit within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is the first deployer to deploy the high-risk AI system, after initial deployment; and (ii) one audit every year following the submission of the first audit. (b) A developer audit under this section shall include: (i) an evaluation and determination of whether the developer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; and (ii) an evaluation of the developer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 2. Deployers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A deployer of a high-risk AI system shall complete at least: (i) a first audit within six months after initial deployment; (ii) a second audit within one year following the submission of the first audit; and (iii) one audit every two years following the submission of the second audit. (b) A deployer audit under this section shall include: (i) an evaluation and determination of whether the deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; (ii) an evaluation of system accuracy and reliability with respect to such high-risk AI system's deployer-intended and actual use cases; and (iii) an evaluation of the deployer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 3. A deployer or developer may hire more than one auditor to fulfill the requirements of this section. 4. The attorney general may, at the attorney general's discretion: (a) promulgate further rules as necessary to ensure that audits under this section assess whether or not AI systems produce algorithmic discrimination and otherwise comply with the provisions of this article; and (b) recommend an updated AI system auditing framework to the legislature, where such recommendations are based on a standard or framework (i) designed to evaluate the risks of AI systems, and (ii) that is nationally or internationally recognized and consensus-driven, including but not limited to a relevant framework or standard created by the International Organization for Standardization. 5. The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer under section eighty-eight of this article. 6. An audit conducted under this section may be completed in part, but shall not be completed entirely, with the assistance of an AI system. (a) Acceptable auditor uses of an AI system include, but are not limited to: (i) use of an audited high-risk AI system in a controlled environment without impacts on end users for system testing purposes; or (ii) detecting patterns in the behavior of an audited AI system.
(b) An auditor shall not: (i) use a different high-risk AI system that is not the subject of an audit to complete an audit; or (ii) use an AI system to draft an audit under this section without meaningful human review and oversight. 7. (a) An auditor shall be an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, or association. (b) For the purposes of this article, no auditor may be commissioned by a developer or deployer of a high-risk AI system if such entity: (i) has already been commissioned to provide any auditing or non-auditing service, including but not limited to financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past twelve months; or (ii) is, will be, or plans to be engaged in the business of developing or deploying an AI system that can compete commercially with such developer's or deployer's high-risk AI system in the five years following an audit. (c) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result. 8. The attorney general may promulgate further rules to ensure (a) the independence of auditors under this section, and (b) that teams conducting audits incorporate feedback from communities that may foreseeably be the subject of algorithmic discrimination with respect to the AI system being audited. 9. If a developer or deployer has an audit completed for the purpose of complying with another applicable federal, state, or local law or regulation, and the audit otherwise satisfies all other requirements of this section, such audit shall be deemed to satisfy the requirements of this section.
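The audit cadence in subdivisions one and two above reduces to simple date arithmetic. The following Python sketch computes illustrative due dates under one reading of those timelines; the function names, the planning horizon, and the assumption that later deadlines run from the prior submission date are all hypothetical, not drawn from the bill.

import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    # Clamp the day for shorter target months (e.g., Jan 31 + 1 month -> Feb 28/29).
    years, month_index = divmod(d.month - 1 + months, 12)
    year, month = d.year + years, month_index + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

def developer_audit_due_dates(initial_offering: date, first_audit_submitted: date, horizon_years: int = 3) -> list[date]:
    # First audit within six months of the initial offering (or initial deployment),
    # then one audit every year following submission of the first audit.
    due = [add_months(initial_offering, 6)]
    due += [add_months(first_audit_submitted, 12 * k) for k in range(1, horizon_years + 1)]
    return due

def deployer_audit_due_dates(initial_deployment: date, first_submitted: date, second_submitted: date, later_audits: int = 2) -> list[date]:
    # First audit within six months of deployment, second within one year of the
    # first submission, then one audit every two years after the second submission.
    due = [add_months(initial_deployment, 6), add_months(first_submitted, 12)]
    due += [add_months(second_submitted, 24 * k) for k in range(1, later_audits + 1)]
    return due

# Example: a system offered to deployers on 2026-01-15, first audit submitted 2026-06-30.
print(developer_audit_due_dates(date(2026, 1, 15), date(2026, 6, 30)))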
1. Each developer or deployer of high-risk AI systems shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of such high-risk AI system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under subdivision one of section eighty-six of this article. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk AI system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this section shall be reasonable considering: (a) The guidance and standards set forth in: (i) version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States department of commerce, or (ii) another substantially equivalent framework selected at the discretion of the attorney general, if such framework was designed to manage risks associated with AI systems, is nationally or internationally recognized and consensus-driven, and is at least as stringent as version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology; (b) The size and complexity of the developer or deployer; (c) The nature, scope, and intended uses of the high-risk AI system developed or deployed; and (d) The sensitivity and volume of data processed in connection with the high-risk AI system. 2. A risk management policy and program implemented pursuant to subdivision one of this section may cover multiple high-risk AI systems developed by the same developer or deployed by the same deployer if sufficient. 3. The attorney general may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subdivision one of this section in a form and manner prescribed by the attorney general. The attorney general may evaluate the risk management policy and program to ensure compliance with this section.
3. (a) Except as provided in subdivision five of this section, any developer that, on or after January first, two thousand twenty-seven, offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence decision system shall, to the extent feasible, make available to such deployers and other developers the documentation and information relating to such high-risk artificial intelligence decision system necessary for a deployer, or the third party contracted by a deployer, to complete an impact assessment pursuant to this article. The developer shall make such documentation and information available through artifacts such as model cards, dataset cards, or other impact assessments. (b) A developer that also serves as a deployer for any high-risk artificial intelligence decision system shall not be required to generate the documentation and information required pursuant to this section unless such high-risk artificial intelligence decision system is provided to an unaffiliated entity acting as a deployer.
2. (a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer of a high-risk artificial intelligence decision system shall implement and maintain a risk management policy and program to govern such deployer's deployment of the high-risk artificial intelligence decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer shall use to identify, document, and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy shall be the product of an iterative process, the risk management program shall be an iterative process, and both the risk management policy and program shall be planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence decision system. Each risk management policy and program implemented and maintained pursuant to this subdivision shall be reasonable, considering: (i) the guidance and standards set forth in the latest version of: (A) the "Artificial Intelligence Risk Management Framework" published by the national institute of standards and technology; (B) standard ISO/IEC 42001 of the international organization for standardization; or (C) a nationally or internationally recognized risk management framework for artificial intelligence decision systems, other than the guidance and standards specified in clauses (A) and (B) of this subparagraph, that imposes requirements that are substantially equivalent to, and at least as stringent as, the requirements established pursuant to this section for risk management policies and programs; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence decision systems deployed by the deployer, including, but not limited to, the intended uses of such high-risk artificial intelligence decision systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence decision systems deployed by the deployer. (b) A risk management policy and program implemented and maintained pursuant to paragraph (a) of this subdivision may cover multiple high-risk artificial intelligence decision systems deployed by the deployer.
1. Beginning on January first, two thousand twenty-seven, each developer of a general-purpose artificial intelligence model shall, except as provided in subdivision two of this section: (a) create and maintain technical documentation for the general-purpose artificial intelligence model, which shall: (i) include: (A) the training and testing processes for such general-purpose artificial intelligence model; and (B) the results of an evaluation of such general-purpose artificial intelligence model performed to determine whether such general-purpose artificial intelligence model is in compliance with the provisions of this article; (ii) include, as appropriate, considering the size and risk profile of such general-purpose artificial intelligence model, at least: (A) the tasks such general-purpose artificial intelligence model is intended to perform; (B) the type and nature of artificial intelligence decision systems in which such general-purpose artificial intelligence model is intended to be integrated; (C) acceptable use policies for such general-purpose artificial intelligence model; (D) the date such general-purpose artificial intelligence model is released; (E) the methods by which such general-purpose artificial intelligence model is distributed; and (F) the modality and format of inputs and outputs for such general-purpose artificial intelligence model; and (iii) be reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such technical documentation;
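The documentation items enumerated above map naturally onto a structured record. The sketch below is one hypothetical way a developer might capture them, with all names and types assumed rather than prescribed by the bill.

from dataclasses import dataclass
from datetime import date

@dataclass
class GeneralPurposeModelDocumentation:
    model_name: str
    training_and_testing_process: str
    compliance_evaluation_results: str
    intended_tasks: list[str]
    intended_integration_types: list[str]   # kinds of decision systems the model may be built into
    acceptable_use_policies: str
    release_date: date
    distribution_methods: list[str]         # e.g., hosted API, downloadable weights
    input_output_modalities: str            # e.g., "text and image in / text out"
    last_reviewed: date

    def annual_review_overdue(self, today: date) -> bool:
        # Documentation must be reviewed and revised at least annually.
        return (today - self.last_reviewed).days > 365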
(d) A developer that is exempt pursuant to subparagraph (ii) of paragraph (a) of this subdivision shall establish and maintain an artificial intelligence risk management framework, which shall: (i) be the product of an iterative process and ongoing efforts; and (ii) include, at a minimum: (A) an internal governance function; (B) a map function that shall establish the context to frame risks; (C) a risk management function; and (D) a function to measure identified risks by assessing, analyzing and tracking such risks.
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
D. All documentation shall comply with state and federal medical record-keeping requirements and be accessible for regulatory review. Documentation of relevant instances where a qualified end-user overrides or disagrees with AI device-generated outputs must be maintained through a summary report indicating the frequency and nature of overrides. Deployers shall document the percentage or number of such overrides or disagreements.
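A short sketch of the override summary contemplated above, computing the count, rate, and nature of overrides from hypothetical per-decision log entries; the field names are assumptions, not terms used in the provision.

from collections import Counter

def override_summary(decisions: list[dict]) -> dict:
    # Each entry is assumed to carry 'overridden' (bool) and, when overridden, a 'reason'.
    total = len(decisions)
    overrides = [d for d in decisions if d.get("overridden")]
    return {
        "total_ai_outputs": total,
        "override_count": len(overrides),
        "override_rate": (len(overrides) / total) if total else 0.0,
        "override_nature": Counter(d.get("reason", "unspecified") for d in overrides),
    }

# Example: 2 of 4 outputs overridden -> override_rate 0.5
print(override_summary([
    {"overridden": False},
    {"overridden": True, "reason": "clinical judgment"},
    {"overridden": True, "reason": "data entry error"},
    {"overridden": False},
]))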
A. Deployers of any artificial intelligence (AI) device shall establish an AI governance group with representation from qualified end-users. This governance group is responsible for overseeing compliance with this act.
B. Deployers shall maintain an updated inventory of deployed AI devices, with device instructions for use and any relevant safety and effectiveness documentation made accessible to all qualified end-users of the device.
E. Deployers shall document the use case and user training procedure for the AI device.
The department shall establish a record retention policy and determine the amount of time a facility shall retain records related to artificial-intelligence algorithms. The department may request input from facilities and health care providers or their representatives in making the determination under this section.
The department shall establish a record retention policy and determine the amount of time an insurer shall retain records. The department may request input from insurers or their representatives in making this determination.
The department shall establish a record retention policy and determine the amount of time an MA or CHIP managed care plan shall retain records. The department may request input from an MA or CHIP managed care plan or their representative to make this determination.
(d) Documentation.--A supplier shall maintain documentation regarding the development and implementation of the chatbot that describes: (1) Foundation models used in development. (2) Training data used. (3) Compliance with Federal and State privacy law. (4) Consumer data collection and sharing practices. (5) Ongoing efforts to ensure accuracy, reliability, fairness and safety.
Insurers shall maintain documentation of artificial intelligence decisions for at least five (5) years including adverse benefit determinations where artificial intelligence made, or was a substantial factor in making, the adverse benefit determination.
(d) An employer shall establish, maintain, and preserve for five (5) years contemporaneous, true, and accurate records of data gathered through the use of an electronic monitoring tool and used in a hiring, promotion, termination, disciplinary or compensation decision, so as to be able to comply with requests for that data from the employee, the employee's authorized representative, or the department. The employer shall destroy any employee information collected via an electronic monitoring tool no later than sixty-one (61) months after collection unless the employee has provided written and informed consent to the retention of their data by the employer. An employer shall establish, implement and maintain reasonable administrative, technical and physical data security practices to protect the confidentiality, integrity and accessibility of employee data, appropriate to the volume and nature of the employee data at issue. An employee shall have the right to request corrections to erroneous employee data.
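The provision above sets two separate clocks: a five-year preservation period for records used in employment decisions and a sixty-one-month destruction deadline (absent written consent) for collected monitoring data. The hypothetical Python sketch below measures both from the collection date purely for illustration; the statute does not specify that starting point.

import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    years, month_index = divmod(d.month - 1 + months, 12)
    year, month = d.year + years, month_index + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

def monitoring_data_windows(collected: date, used_in_decision: bool, written_consent: bool) -> dict:
    # Preservation floor (five years) applies only to data used in a covered decision;
    # the sixty-one-month destruction ceiling applies unless written consent is on file.
    return {
        "preserve_until": add_months(collected, 60) if used_in_decision else None,
        "destroy_by": None if written_consent else add_months(collected, 61),
    }

# Data collected 2026-03-15 and used in a promotion decision, no consent on file:
# preserve until 2031-03-15 and destroy no later than 2031-04-15.
print(monitoring_data_windows(date(2026, 3, 15), used_in_decision=True, written_consent=False))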
(D) A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of personal data and chat logs that are maintained by the chatbot provider. The program must be written and made publicly available on the chatbot provider's website.
(B)(1) Except as provided in subsection (F), a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable considering: (a)(i) The guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (ii) any risk management framework for artificial intelligence systems that the Attorney General, in his discretion, may designate; (b) the size and complexity of the deployer; (c) the nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and (d) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer. (2) A risk management policy and program implemented pursuant to item (1) may cover multiple high-risk artificial intelligence systems deployed by the deployer.
(2) A participant shall: (a) provide required information to state agencies in accordance with the terms of the participation agreement; and (b) report to the office as required in the participation agreement. ... (4) A participant shall retain records as required by office rule or the participation agreement.
(4) A regulatory mitigation agreement between a participant and the office and relevant agencies shall specify: (a) limitations on scope of the use of the participant's artificial intelligence technology, including: (i) the number and types of users; (ii) geographic limitations; and (iii) other limitations to implementation; (b) safeguards to be implemented; and (c) any regulatory mitigation granted to the applicant. ... (6) A participant remains subject to all legal and regulatory requirements not expressly waived or modified by the terms of the regulatory mitigation agreement.
E. The first draft of any report or record created in whole or in part by using generative artificial intelligence shall be retained for as long as the final report is retained. The program used to generate a draft or final report shall maintain an audit trail that, at a minimum, identifies (i) the person who used artificial intelligence to create or edit the report; (ii) any changes made to the report following the initial draft; and (iii) the video and audio footage used to create a report, if any.
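A minimal sketch of the audit-trail fields the provision above requires for a report drafted with generative artificial intelligence; the structure and names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AuditTrailEntry:
    actor: str            # the person who used artificial intelligence to create or edit the report
    action: str           # e.g., "initial AI-generated draft", "manual revision"
    timestamp: datetime
    changes_summary: str  # changes made relative to the prior version

@dataclass
class ReportAuditTrail:
    report_id: str
    source_footage: list[str] = field(default_factory=list)  # video and audio used, if any
    entries: list[AuditTrailEntry] = field(default_factory=list)

    def record(self, actor: str, action: str, changes_summary: str) -> None:
        self.entries.append(AuditTrailEntry(actor, action, datetime.now(), changes_summary))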
Each carrier shall ... (iii) maintain documentation of AI decisions for at least three years;
(a) Each developer or deployer of automated decision systems used in consequential decisions shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of the automated decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under section 4193b of this title. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of an automated decision system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this subsection shall be reasonable considering the: (1) guidance and standards set forth in version 1.0 of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology in the U.S. Department of Commerce, or the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology if, in the Attorney General's discretion, the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology in the U.S. Department of Commerce is at least as stringent as version 1.0; (2) size and complexity of the developer or deployer; (3) nature, scope, and intended uses of the automated decision system developed or deployed for use in consequential decisions; and (4) sensitivity and volume of data processed in connection with the automated decision system. (b) A risk management policy and program implemented pursuant to subsection (a) of this section may cover multiple automated decision systems developed by the same developer or deployed by the same deployer for use in consequential decisions if sufficient.
(b) No deployer shall deploy an inherently dangerous artificial intelligence system or an artificial intelligence system that creates reasonably foreseeable risks pursuant to section 4193f of this subchapter unless the deployer has designed and implemented a risk management policy and program for the model or system. The risk management policy shall specify the principles, processes, and personnel that the deployer shall use in maintaining the risk management program to identify, mitigate, and document any risk that is a reasonably foreseeable consequence of deploying or using the system. Each risk management policy and program designed, implemented, and maintained pursuant to this subsection shall be: (1) at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the NIST; and (2) reasonable considering: (A) the size and complexity of the deployer; (B) the nature and scope of the system, including the intended uses and unintended uses and the modifications made to the system by the deployer; and (C) the data that the system, once deployed, processes as inputs.
(d) Data security program. A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of the personal data and chat logs maintained by the chatbot provider. The program shall be written and made publicly available on the chatbot provider's website.
(a) It is an affirmative defense to liability in an action for unlawful or unprofessional conduct brought against a supplier by the Office of Professional Regulation or the Board of Medical Practice if the supplier demonstrates that the supplier meets all of the following conditions: (1) the supplier created, maintained, and implemented a policy that meets the requirements of subsection (b) of this section; (2) the supplier maintains documentation regarding the development and implementation of the mental health chatbot that describes: (A) foundation models used in development; (B) training tools used; (C) compliance with federal health privacy regulations; (D) user data collection and sharing practices; and (E) ongoing efforts to ensure accuracy, reliability, fairness, and safety; (3) the supplier filed the policy with the Office of the Attorney General; and (4) the supplier complied with all requirements of the filed policy at the time of the alleged violation. (b) A policy described in subdivision (a)(1) of this section shall meet all of the following requirements: (1) be in writing; (2) clearly state: (A) the intended purposes of the mental health chatbot; and (B) the abilities and limitations of the mental health chatbot; (3) describe the procedures by which the supplier: (A) ensures that qualified mental health providers licensed in Vermont or in one or more other states, or both, are involved in the development and review process; (B) ensures that the mental health chatbot is developed and monitored in a manner consistent with clinical best practices; (C) conducts testing prior to making the mental health chatbot publicly available and regularly thereafter to ensure that the output of the mental health chatbot poses no greater risk to a user than that posed to an individual in psychotherapy with a licensed mental health provider; (D) identifies reasonably foreseeable adverse outcomes to and potentially harmful interactions with users that could result from using the mental health chatbot; (E) provides a mechanism for a user to report any potentially harmful interactions from use of the mental health chatbot; (F) implements protocols to assess and respond to risk of harm to users or other individuals; (G) details actions taken to prevent or mitigate any such adverse outcomes or potentially harmful interactions; (H) implements protocols to respond in real time to acute risk of physical harm; (I) reasonably ensures regular, objective reviews of safety, accuracy, and efficacy, which may include internal or external audits; (J) provides users any necessary instructions on the safe use of the mental health chatbot; (K) ensures users understand that they are interacting with artificial intelligence; (L) ensures users understand the intended purpose, capabilities, and limitations of the mental health chatbot; (M) prioritizes user mental health and safety over engagement metrics or profit; (N) implements measures to prevent discriminatory treatment of users; and (O) ensures compliance with the security and privacy protections of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A, C, and E, as if the supplier were a covered entity, and applicable consumer protection requirements, including sections 9761-9763 of this subchapter.
(5) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
(6) For a disclosure required pursuant to this section, a developer shall, no later than 90 days after the developer performs an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
(2)(a) A deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has designed and implemented a risk management policy and program for such high-risk artificial intelligence system. The risk management policy must specify the principles, processes, and personnel that the deployer must use in maintaining the risk management program to identify, mitigate, and document any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using such high-risk artificial intelligence system to make a consequential decision. (b) A risk management policy and program designed, implemented, and maintained pursuant to this section is presumed to be in conformity with related requirements set out in this section if the policy and program align with the guidance and standards set forth in the latest version of: (i) The artificial intelligence risk management framework published by the national institute of standards and technology; (ii) Standard ISO/IEC 42001 of the international organization for standardization; or (iii) A nationally or internationally recognized risk management framework for artificial intelligence systems with requirements that are substantially equivalent to, and at least as stringent as, the guidance and standards described in (b)(i) and (ii) of this subsection (2). (c) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
(7) For a disclosure required pursuant to this section, each deployer shall, no later than 30 days after the deployer is notified by the developer that the developer has performed an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
(1) Beginning July 1, 2027, and except as provided in section 5(6) of this act, each deployer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the deployer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the deployer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. (c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer. (3) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
(7) For a disclosure required pursuant to this section, each deployer shall, no later than 30 days after the deployer is notified by the developer that the developer has performed an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate. (8) A deployer who performs an intentional and substantial modification to a high-risk artificial intelligence system shall comply with the documentation and disclosure requirements for developers pursuant to section 2 of this act.
(1) Beginning July 1, 2027, and except as provided in section 6(6) of this act, each deployer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the deployer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the deployer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. (c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer.
(1) Beginning July 1, 2027, and except as provided in section 6(6) of this act, each developer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the developer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the developer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the life cycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the developer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the developer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the developer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. (c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems developed or deployed by the developer. (3) A developer that also serves as a deployer for any high-risk artificial intelligence system may not be required to generate the documentation required by this section unless such high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer or as otherwise required by law. (5) This section does not apply to a developer with fewer than 50 full-time equivalent employees.
(a) A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within three years of the individual's last interaction with the private entity, whichever occurs first. Absent a valid warrant or subpoena issued by a court of competent jurisdiction, a private entity in possession of biometric identifiers or biometric information must comply with its established retention schedule and destruction guidelines.
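The destruction trigger above is a "whichever occurs first" rule; a hypothetical Python sketch of that comparison, with all names assumed for illustration:

from datetime import date
from typing import Optional

def add_years(d: date, years: int) -> date:
    try:
        return d.replace(year=d.year + years)
    except ValueError:  # Feb 29 anniversary in a non-leap year
        return d.replace(year=d.year + years, day=28)

def biometric_destruction_deadline(last_interaction: date, purpose_satisfied: Optional[date]) -> date:
    # Destroy when the collection purpose is satisfied or three years after the
    # individual's last interaction, whichever occurs first.
    three_year_cap = add_years(last_interaction, 3)
    return three_year_cap if purpose_satisfied is None else min(purpose_satisfied, three_year_cap)

# Purpose satisfied before the three-year cap -> deadline is the purpose-satisfaction date.
print(biometric_destruction_deadline(date(2025, 6, 1), date(2026, 1, 10)))  # 2026-01-10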