Developers or deployers of certain AI systems must submit documentation — including system descriptions, risk assessments, and safety evaluation results — to regulatory authorities either proactively on a defined schedule or in response to regulatory requests. Proactive submission requirements cannot be satisfied by waiting to be asked.
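As a rough illustration of the distinction, the following Python sketch models both modes of obligation. It is a minimal sketch only: the class name, fields, and the quarterly/90-day figures are illustrative assumptions, not terms drawn from any one statute below.

```python
# Illustrative model of the two submission modes: proactive filings on a
# fixed schedule vs. filings due within a window after a regulator's request.
# All names and intervals here are assumptions for illustration.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class SubmissionObligation:
    description: str
    proactive_interval_days: Optional[int]  # e.g. 90 for quarterly filings; None if request-driven
    response_window_days: Optional[int]     # e.g. 90 days after a request; None if purely proactive

    def next_due(self, last_filed: Optional[date], request_date: Optional[date]) -> Optional[date]:
        """Return the next filing deadline, or None if nothing is currently due."""
        if self.proactive_interval_days is not None and last_filed is not None:
            # Proactive duty: runs on the clock, not on the regulator's request.
            return last_filed + timedelta(days=self.proactive_interval_days)
        if self.response_window_days is not None and request_date is not None:
            return request_date + timedelta(days=self.response_window_days)
        return None

# Example: a quarterly risk summary and a 90-day response to a regulator's demand.
quarterly = SubmissionObligation("risk summary", proactive_interval_days=90, response_window_days=None)
on_request = SubmissionObligation("impact assessment", proactive_interval_days=None, response_window_days=90)
print(quarterly.next_due(last_filed=date(2026, 1, 1), request_date=None))
print(on_request.next_due(last_filed=None, request_date=date(2026, 2, 1)))
```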
(2) An insurer shall certify annually to the department that the artificial intelligence used to make determinations on requests for prior authorization complies with all of the following: a. Does not rely solely on a group dataset to make determinations. b. Is configured and applied in a fair manner for each subscriber group and enrollee such that resulting determinations are consistent for enrollees who present with similar clinical considerations. c. Does not discriminate directly or indirectly against any subscriber group or enrollee in violation of state or federal law, including any regulation or guidance issued by the federal Department of Health and Human Services.
(2) Certify annually to the department that: (i) use of artificial intelligence and the outcomes that it generates are reviewed on a periodic basis to maximize accuracy and reliability; and (ii) use of artificial intelligence in utilization review complies with the requirements of subsection (b).
(a) On or before January 1, 2028, the Attorney General shall do all of the following: (1) Adopt regulations that include, at a minimum, all of the following: (A) Professional and ethical standards for auditors that ensure independence. (B) Eligibility requirements for auditors. (C) Procedures for auditors to assess compliance with this chapter. (D) Requirements for AI child safety audit reports. (2) Establish a public incident reporting mechanism for consumers to submit complaints relating to companion chatbots to the Attorney General. (3) Establish a process for qualified researchers to access anonymized and aggregated audit data for academic study of child safety in companion chatbots. (b) Beginning January 1, 2028, the Attorney General shall issue an annual public report that includes the following: (1) A high-level summary of each child safety audit report. (2) The total number of child safety audits conducted. (3) Common findings and trends across the companion chatbot industry. (4) Emerging child safety risks identified through audit reviews. (5) Best practices and effective mitigation strategies observed. (6) Aggregated data on compliance rates and common deficiencies. (7) Recommendations for operators, parents, and policymakers.
(a) (1) A developer shall provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment performed pursuant to this chapter. (2) Notwithstanding any other law, an impact assessment provided to the Attorney General or Civil Rights Department pursuant to this subdivision shall be kept confidential.
(d) A large frontier developer shall transmit to the Office of Emergency Services a summary of any assessment of catastrophic risk resulting from internal use of its frontier models every three months or pursuant to another reasonable schedule specified by the large frontier developer and communicated in writing to the Office of Emergency Services with written updates, as appropriate.
(4) A PERSON DESCRIBED IN SUBSECTION (2) OF THIS SECTION SHALL PROVIDE WRITTEN DISCLOSURES TO THE DIVISION, THE DEPARTMENT OF HUMAN SERVICES, OR THE DEPARTMENT OF HEALTH CARE POLICY AND FINANCING, AS APPLICABLE, THAT IDENTIFY: (a) THE UTILIZATION REVIEW FUNCTIONS FOR WHICH THE ARTIFICIAL INTELLIGENCE SYSTEM WILL BE USED; (b) THE POINTS IN THE UTILIZATION REVIEW PROCESS WHEN THE ARTIFICIAL INTELLIGENCE SYSTEM IS USED; (c) THE HUMAN OVERSIGHT PROCESS, INCLUDING THE QUALIFICATIONS OF THE REVIEWER AND WHETHER A HUMAN MUST APPROVE AN ADVERSE DETERMINATION; AND (d) THE PROCESS FOR MAINTAINING AUDIT INFORMATION SUFFICIENT TO DEMONSTRATE COMPLIANCE WITH SUBSECTION (3) OF THIS SECTION.
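The four enumerated disclosure elements above lend themselves to a simple structured record. The sketch below is a hypothetical representation only; the field names paraphrase paragraphs (a) through (d) and are not statutory terms.

```python
# Hypothetical record for the written disclosure in subsection (4);
# field names are paraphrases of (a)-(d), not statutory language.
from dataclasses import dataclass
from typing import List

@dataclass
class UtilizationReviewAIDisclosure:
    review_functions: List[str]      # (a) functions the AI system will perform
    process_points: List[str]        # (b) where in the review process the system is used
    reviewer_qualifications: str     # (c) human oversight: reviewer credentials
    human_approves_adverse: bool     # (c) whether a human must approve adverse determinations
    audit_record_process: str        # (d) how audit information is maintained

disclosure = UtilizationReviewAIDisclosure(
    review_functions=["prior authorization triage"],
    process_points=["initial clinical screening"],
    reviewer_qualifications="licensed physician in the relevant specialty",
    human_approves_adverse=True,
    audit_record_process="immutable review logs retained per subsection (3)",
)
```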
(5) On and after June 30, 2026, a developer of a high-risk artificial intelligence system shall disclose to the attorney general, in a form and manner prescribed by the attorney general, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which:
(7) On and after June 30, 2026, the attorney general may require that a developer disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the statement or documentation described in subsection (2) of this section. The attorney general may evaluate such statement or documentation to ensure compliance with this part 17, and the statement or documentation is not subject to disclosure under the "Colorado Open Records Act", part 2 of article 72 of title 24. In a disclosure made pursuant to this subsection (7), a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
(9) On and after June 30, 2026, the attorney general may require that a deployer, or a third party contracted by the deployer, disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subsection (2) of this section, the impact assessment completed pursuant to subsection (3) of this section, or the records maintained pursuant to subsection (3)(f) of this section. The attorney general may evaluate such risk management policy, impact assessment, or records to ensure compliance with this part 17, and the risk management policy, impact assessment, and records are not subject to disclosure under the "Colorado Open Records Act", part 2 of article 72 of title 24. In a disclosure made pursuant to this subsection (9), a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Except as provided in subsection (f) of this Code section, a developer of an automated decision system shall provide certain information regarding such automated decision system to the Attorney General, in a form and manner prescribed by the Attorney General. Such information shall include, at a minimum: (1) A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the automated decision system; (2) Documentation disclosing: (A) The purpose of the automated decision system; (B) The intended benefits and uses of the automated decision system; (C) High-level summaries of the types of data used to train the automated decision system; (D) Known or reasonably foreseeable limitations of the automated decision system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the automated decision system; (E) The measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination; (F) How the automated decision system was evaluated for performance and mitigation of algorithmic discrimination before the automated decision system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (G) The data governance measures used to cover the training data sets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (H) How the automated decision system should be used, not be used, and be monitored by an individual when the automated decision system is used to make, or assist in making, a consequential decision; and (I) All other information necessary to allow the deployer to comply with the requirements of Code Section 10-16-3; and (3) Any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitoring the performance of the automated decision system for risks of algorithmic discrimination.
The Attorney General may require that a developer disclose to the Attorney General, within seven days and in a form and manner prescribed by the Attorney General, any documentation or records required by this Code section, including, but not limited to, the statement or documentation described in subsection (b) of this Code section. The Attorney General may evaluate such statement or documentation to ensure compliance with this chapter, and, notwithstanding the provisions of Article 4 of Chapter 18 of Title 50, relating to open records, such records shall not be open to inspection by or made available to the public. In a disclosure pursuant to this subsection, a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
The Attorney General may require that a deployer, or a third party contracted by the deployer, disclose to the Attorney General, no later than seven days after the request and in a form and manner prescribed by the Attorney General, any documentation or records required by this chapter. The Attorney General may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and such records, notwithstanding the provisions of Article 4 of Chapter 18 of Title 50, relating to open records, shall not be open to inspection by or made available to the public. In a disclosure pursuant to this Code section, a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records is subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
(a) The Department's regulatory oversight of health insurance coverage includes oversight of the use of AI systems or predictive models to make or support adverse consumer outcomes. The Department's authority in an investigation or market conduct action includes review regarding the development, implementation, and use of AI systems or predictive models and the outcomes from the use of those AI systems or predictive models. The Department may also request other information or documentation relevant to an investigation or market conduct action, and a health insurance issuer or any other person described in subsection (b) of Section 132 of the Illinois Insurance Code must comply with that request. The Department's inquiries may include, but are not limited to, questions regarding any specific model, AI system, or application of a model or AI system. The Department may also make requests for information and documentation relating to AI systems governance, risk management, and use protocols; information and documentation relating to the health insurance issuer's preacquisition and preutilization diligence, monitoring, and auditing of data or AI systems developed or used by a third party; and information and documentation relating to implementation and compliance with the health insurance issuer's AI systems program.
(a) Each impact assessment conducted by a State agency under this Act shall be submitted to the Governor and the General Assembly at least 30 days prior to implementation of the automated decision-making system that is the subject of the assessment. Each impact assessment conducted by any other public body under this Act shall be submitted to the director of the public body or the executive officers or primary administrator of the relevant governing body at least 30 days prior to implementation of the automated decision-making system that is the subject of the assessment. (b) If the employer determines that disclosure of any information in the impact assessment would result in a substantial negative impact on public health or safety, infringe upon privacy rights, or significantly impair the employer's ability to protect its information technology or operational assets, the information may be redacted, if an explanatory statement describing the determination process for redaction is published along with the redacted assessment. (c) If the impact assessment covers technology used to prevent, detect, protect against, or respond to security incidents, identity theft, fraud, harassment, or other illegal activity, the employer may redact related information, if an explanatory statement describing the determination process for redaction is published along with the redacted assessment.
(a) The Department's regulatory oversight of insurers includes oversight of an insurer's use of AI systems to make or support adverse determinations that affect consumers. Any insurer authorized to operate in the State is subject to review by the Department in an investigation or market conduct action regarding the development, implementation, and use of AI systems or predictive models and the outcomes from the use of those AI systems or predictive models. The Department may also request other information or documentation relevant to an investigation or market conduct action, and an insurer must comply with that request. The Department's inquiries may include, but are not limited to, questions regarding any specific model, AI system, or application of a model or AI system. The Department may also make requests for information and documentation relating to AI systems governance, risk management, and use protocols; information and documentation relating to the insurer's preacquisition and preutilization diligence, monitoring, and auditing of data or AI systems developed by a third party; and information and documentation relating to implementation and compliance with the insurer's AI systems program.
Sec. 15. (a) The department may do the following: (1) Receive complaints regarding alleged violations of this chapter. (2) Investigate any facts, conditions, practices, or matters as the department deems necessary or appropriate to determine whether an employer has violated this chapter. (3) Require an employer to file with the department, on a form prescribed by the department, annual or special reports or answers in writing to specific questions relating to the use of an automated decision system for employment related decisions. (b) If the department requires an employer to file a report or answers under subsection (a)(3), the employer shall file the report or answers in the manner and time period required by the department. (c) An employer shall maintain, keep, preserve, and make available to the department records pertaining to compliance with this chapter.
(g) Not later than 6 months after the effective date of this act, the attorney general may require that a developer disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the statement or documentation described in subsection (b) of this section. The attorney general may evaluate such statement or documentation to ensure compliance with this chapter, and the statement or documentation is not subject to disclosure under the "Massachusetts Public Records Law", chapter 66, section 10 of the General Laws. In a disclosure pursuant to this subsection (g), a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
(i) Not later than 6 months after the effective date of this act, the attorney general may require that a deployer, or a third party contracted by the deployer, disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subsection (b) of this section, the impact assessment completed pursuant to subsection (c) of this section, or the records maintained pursuant to subsection (c)(6) of this section. The attorney general may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and records are not subject to disclosure under the "Massachusetts Public Records Law", chapter 66, section 10 of the General Laws. In a disclosure pursuant to this subsection (i), a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
9. THE TOTAL NUMBER OF GRIEVANCES REVIEWED UNDER § 15–10A–02(B)(2)(VI) OF THIS SUBTITLE AND AGGREGATED BY: A. TYPE OF CLAIM; B. RACE, GENDER, AND PROFESSION OF MEMBER; AND C. TYPE OF POLICY, INCLUDING INDIVIDUAL, SMALL GROUP, OR LARGE GROUP AND WHETHER THE POLICY WAS PURCHASED ON THE HEALTH BENEFIT EXCHANGE; AND
6. the number of adverse decisions issued by the carrier under § 15–10A–02(f) of this subtitle, whether the adverse decision involved a prior authorization or step therapy protocol, the type of service at issue in the adverse decisions, and whether an artificial intelligence, algorithm, or other software tool was used in making the adverse decision;
(c) Every time an employer provides a notice under paragraph (a), a copy of that notice must be submitted to the commissioner of labor and industry within ten days of the date the notice was provided to the worker. Copies of notices under paragraph (a) must also be made available to authorized representatives upon request.
(a) No person shall operate or distribute a chatbot that deals substantially with health information without first obtaining a health information chatbot license. (b) An application for a health information chatbot license shall include all of the following: (1) Detailed documentation of the chatbot's: a. Technical architecture and operational specifications. b. Data collection, processing, storage, and deletion practices. c. Security measures and protocols. d. Privacy protection mechanisms. (2) Quality control and testing procedures. (3) Risk assessment and mitigation strategies. (4) Evidence of compliance with applicable federal and state regulations. (5) Proof of insurance coverage. (6) Required application fees. (7) Any additional information required by the Department.
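An applicant preparing such a filing might track completeness against the enumerated items. The following is an illustrative sketch only; the item keys paraphrase subsection (b), and the open-ended catch-all in paragraph (7) is omitted.

```python
# Illustrative completeness check for the license application in subsection (b).
# The keys are paraphrases of items (1)-(6), not statutory terms.
REQUIRED_APPLICATION_ITEMS = {
    "technical_architecture",          # (1)a.
    "data_practices",                  # (1)b. collection, processing, storage, deletion
    "security_measures",               # (1)c.
    "privacy_mechanisms",              # (1)d.
    "quality_control_testing",         # (2)
    "risk_assessment_mitigation",      # (3)
    "regulatory_compliance_evidence",  # (4)
    "proof_of_insurance",              # (5)
    "application_fees",                # (6)
}

def missing_items(application: dict) -> set:
    """Return required items absent from a draft application."""
    return REQUIRED_APPLICATION_ITEMS - set(application)

draft = {"technical_architecture": "...", "proof_of_insurance": "..."}
print(sorted(missing_items(draft)))
```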
(b) The Attorney General shall designate a Director, officers, and employees assigned to the oversight and enforcement of this Chapter. Upon presenting appropriate credentials and a written notice to the owner, operator, or agent in charge, those officers and employees are authorized to enter, at reasonable times, any factory, warehouse, or establishment in which chatbots licensed under this Chapter are manufactured, processed, or held, and to inspect, in a reasonable manner and within reasonable limits and in a reasonable time. In addition to physical inspections, the Department may conduct digital inspections of licensed chatbots under this Chapter, to include the following: (1) Examination of source code, algorithms, and machine learning models. (2) Review of data processing and storage practices. (3) Evaluation of cybersecurity measures and protocols. (4) Assessment of user data privacy protections. (5) Testing of chatbot responses and behaviors in various scenarios. (6) Audit of data collection, use, and retention practices. (7) Inspection of software development and update processes. (8) Review of remote access and monitoring capabilities. (9) Evaluation of integration with other digital health technologies or platforms. (c) As part of any inspection, whether physical or digital, the Director may require access to all records relating to the development, testing, validation, production, distribution, and performance of a chatbot licensed under this Chapter. (d) Any information obtained during an inspection which falls within the definition of a trade secret or confidential commercial information as defined in 21 CFR 20.61 shall be treated as confidential and shall not be disclosed under Chapter 132 of the General Statutes, except as may be necessary in proceedings under this Chapter or other applicable law. (e) Following any inspection, the Director shall provide a detailed report of findings to the manufacturer or importer, including any identified deficiencies and required corrective actions. (f) Every person who is a manufacturer or importer of a licensed chatbot under this Chapter shall establish and maintain such records, and make such reports to the Director, as the Director may by regulation reasonably require to assure the safety and effectiveness of such devices.
(5) The Attorney General shall establish a mechanism to be used by a large frontier developer to confidentially submit summaries of any assessments of the potential for catastrophic risk resulting from internal use of its frontier models. (6) A large frontier developer shall transmit to the Attorney General a summary of any assessment of catastrophic risk resulting from internal use of its frontier models no less frequently than every three months.
(7)(a) On and after February 1, 2026, the Attorney General may provide a written demand to any developer to disclose to the Attorney General the statement or documentation described in subsection (2) of this section if such a statement or documentation is relevant to an investigation related to the developer conducted by the Attorney General. Such statement or documentation shall be provided to the Attorney General in a form and manner prescribed by the Attorney General. (b) The Attorney General may evaluate such statement or documentation, if it is relevant to an investigation conducted by the Attorney General regarding a violation of the Artificial Intelligence Consumer Protection Act, to ensure compliance with the Artificial Intelligence Consumer Protection Act. (c) In any disclosure pursuant to this subsection, any developer may designate the statement or documentation as including proprietary information or a trade secret. (d) To the extent any such statement or documentation includes any proprietary information or any trade secret, such statement or documentation shall be exempt from disclosure.
(8)(a) On and after February 1, 2026, in connection with an ongoing investigation related to the deployer, the Attorney General may require any deployer or third party contracted by a deployer to disclose any of the following to the Attorney General no later than ninety days after such request in a form and manner prescribed by the Attorney General: (i) The risk management policy implemented pursuant to subsection (2) of this section; (ii) The impact assessment completed pursuant to subsection (3) of this section; or (iii) The records maintained pursuant to subdivision (3)(f) of this section. (b) If such risk management policy, impact assessment, or record is relevant to an investigation conducted by the Attorney General regarding a violation of the Artificial Intelligence Consumer Protection Act, the Attorney General may evaluate the risk management policy, impact assessment, or records disclosed pursuant to subdivision (a) of this subsection to ensure compliance with the Artificial Intelligence Consumer Protection Act. (c) Any disclosure under this subsection shall not be a public record subject to disclosure pursuant to sections 84-712 to 84-712.09. (d) A deployer may designate any statement or documentation disclosed under this subsection as including proprietary information or a trade secret.
An artificial intelligence company shall annually subject all artificial intelligence technology sold, developed, deployed, used, or offered for sale in this State to a safety test that adheres to the requirements established pursuant to subsection b. of this section and submit a report to the Office of Information Technology containing: (1) a list of all artificial intelligence technologies tested; (2) a description of each safety test conducted, including the safety test's adherence to the requirements established pursuant to subsection b. of this section; (3) a list of all third parties used to conduct safety tests, if any; and (4) the results of each safety test administered.
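The report's four required components map naturally onto a structured record. The sketch below is an assumption-laden illustration; none of the names are drawn from the bill.

```python
# Illustrative structure for the four-part annual safety-test report;
# class and field names are assumptions, not statutory terms.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SafetyTestResult:
    technology: str                     # (1) technology tested
    test_description: str               # (2) test conducted and its adherence to subsection b.
    third_party_tester: Optional[str]   # (3) third party used, if any
    results: str                        # (4) outcome of the test

@dataclass
class AnnualSafetyReport:
    year: int
    tests: List[SafetyTestResult]

    def technologies_tested(self) -> List[str]:
        """Item (1): the list of all technologies tested this cycle."""
        return [t.technology for t in self.tests]
```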
(6) develop an AI Impact Disclosure that employers deploying AI systems that result in layoffs shall file with the department. This disclosure shall contain, at a minimum, the date on which the AI tool that resulted in layoffs was deployed, the date of layoffs, and the number of workers displaced by the AI tool deployment; and (7) develop a supplemental contribution schedule to the AI Horizon Fund based on the number of layoffs attributable to AI and develop a mechanism for assessment and payment of these assessments. b. The disclosure statements and supplemental contributions specified in paragraphs (6) and (7) of subsection a. of this section shall only be applicable to firms which have 100 or more employees.
d. An employer that uses an artificial intelligence analysis of a video interview to determine whether an applicant will be selected for an in-person interview shall collect and report the following demographic data: (1) the race and ethnicity of applicants who are and are not afforded the opportunity for an in-person interview after the use of artificial intelligence analysis; and (2) the race and ethnicity of applicants who are offered a position or hired. e. The demographic data collected under subsection d. of this section shall be reported annually to the Department of Labor and Workforce Development.
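The aggregation that subsections d. and e. describe amounts to counting applicants by race and ethnicity across the interview and hiring outcomes. A minimal sketch, assuming a simple per-applicant record layout that the bill itself does not prescribe:

```python
# Hedged sketch of the annual demographic tally in subsections d. and e.;
# the per-applicant record layout is an assumption.
from collections import Counter

def tally_by_race_ethnicity(applicants):
    """Aggregate outcomes by race/ethnicity for the annual report.

    Each applicant is a dict like:
    {"race_ethnicity": "...", "in_person_interview": bool, "hired": bool}
    """
    interviewed, not_interviewed, hired = Counter(), Counter(), Counter()
    for a in applicants:
        key = a["race_ethnicity"]
        if a["in_person_interview"]:
            interviewed[key] += 1       # d.(1) afforded an in-person interview
        else:
            not_interviewed[key] += 1   # d.(1) not afforded one
        if a["hired"]:
            hired[key] += 1             # d.(2) offered a position or hired
    return {"interviewed": interviewed,
            "not_interviewed": not_interviewed,
            "offered_or_hired": hired}
```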
Each AI infrastructure entity shall, at the time of initial deployment and annually thereafter, in a manner determined by the department: a. Conduct an environmental impact assessment and provide an additional environmental impact assessment with any capacity expansion, and file the assessment with the department; b. Submit annual reports to the department detailing energy consumption, water usage, and carbon emissions; and c. Enter into community benefit agreements with affected municipalities, and file the agreement with the department.
(a) Except as provided in subdivision five of this section, any developer that, on or after January first, two thousand twenty-seven, offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence decision system shall, to the extent feasible, make available to such deployers and other developers the documentation and information relating to such high-risk artificial intelligence decision system necessary for a deployer, or the third party contracted by a deployer, to complete an impact assessment pursuant to this article. The developer shall make such documentation and information available through artifacts such as model cards, dataset cards, or other impact assessments. (b) A developer that also serves as a deployer for any high-risk artificial intelligence decision system shall not be required to generate the documentation and information required pursuant to this section unless such high-risk artificial intelligence decision system is provided to an unaffiliated entity acting as a deployer.
Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general and in a form and manner prescribed by the attorney general, the general statement or documentation described in subdivision two of this section. The attorney general may evaluate such general statement or documentation to ensure compliance with the provisions of this section. In disclosing such general statement or documentation to the attorney general pursuant to this subdivision, the developer may designate such general statement or documentation as including any information that is exempt from disclosure pursuant to subdivision five of this section or article six of the public officers law. To the extent such general statement or documentation includes such information, such general statement or documentation shall be exempt from disclosure. To the extent any information contained in such general statement or documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Beginning on January first, two thousand twenty-seven, the attorney general may require that a deployer, or a third party contracted by the deployer pursuant to subdivision three of this section, as applicable, disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general, and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subdivision two of this section, the impact assessment completed pursuant to subdivision three of this section; or records maintained pursuant to paragraph (e) of subdivision three of this section. The attorney general may evaluate such risk management policy, impact assessment or records to ensure compliance with the provisions of this section. In disclosing such risk management policy, impact assessment or records to the attorney general pursuant to this subdivision, the deployer or third-party contractor, as applicable, may designate such risk management policy, impact assessment or records as including any information that is exempt from disclosure pursuant to subdivision eight of this section or article six of the public officers law. To the extent such risk management policy, impact assessment, or records include such information, such risk management policy, impact assessment, or records shall be exempt from disclosure. To the extent any information contained in such risk management policy, impact assessment, or record is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general and in a form and manner prescribed by the attorney general, any documentation maintained pursuant to this section. The attorney general may evaluate such documentation to ensure compliance with the provisions of this section. In disclosing any documentation to the attorney general pursuant to this subdivision, the developer may designate such documentation as including any information that is exempt from disclosure pursuant to subdivision three of this section or article six of the public officers law. To the extent such documentation includes such information, such documentation shall be exempt from disclosure. To the extent any information contained in such documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
§ 510. Duty to register a high-risk advanced artificial intelligence system. 1. Any person who develops a high-risk advanced artificial intelligence system, whether in whole or in part, in the state that is presently performing functions for its intended purpose or within its designated operational parameters, shall have the duty to disclose the existence and function of said system to the secretary by applying for a license as required under section five hundred eleven of this article or, where applicable, a supplemental license under section five hundred twelve of this article. This duty to disclose shall be triggered by the system's active deployment and usage in its intended context or field of operation and is applicable irrespective of the system's location of operation. This duty extends to any updates, modifications, upgrades, or expansions of the system's capabilities or intended uses. 2. Any person developing a system as defined in paragraph (i) of subdivision two of section five hundred one of this article within the state shall disclose in writing to the secretary the development of such a system prior to active development of the system. Such writing shall set forth the names and addresses of all persons involved in the development of such system, a description of the system, the system's functions and intended use cases, and measures that will be taken to ensure that any risks posed by the system are mitigated. The secretary may, upon receipt of such writing, require such person to cease development of such a system where, in the secretary's discretion, the secretary believes the system has a high likelihood of violating section five hundred twenty-nine or section five hundred thirty of this article. 3. The duties set forth in this section shall apply only to advanced artificial intelligence systems that more likely than not fall under the definition of high-risk advanced artificial intelligence system as defined in section five hundred one of this article. The secretary shall send notice to any system that is presently performing functions for its intended purpose or within its designated operational parameters which, in their discretion, may fall under the definition of high-risk advanced artificial intelligence systems but that has not registered with the secretary. In the notice, the secretary may require the creators of the system to cease development and access by private individuals or the general public, pending review. Such notice shall be binding and have the effect of law. Determinations that a system is a high-risk advanced artificial intelligence system shall be made in a hearing held pursuant to the provisions of section five hundred nine of this article. In such hearing, the administrator of such hearing shall accept comments from the public. Such hearing shall, to the extent practicable, not disclose any proprietary information concerning the advanced artificial intelligence system to the public.
§ 513. Application for licenses. 1. An application for a license required under this article shall be in writing, under oath, and in the form prescribed by the secretary, and shall contain the following: (a) the exact name and address of the applicant, and if the applicant be a co-partnership or association, the names of the members thereof, and if a corporation the date and place of its incorporation; (b) the name and the business and residential address of each member of the ethics and risk management board, each principal, and officer of the applicant; and (c) the description of all known general use cases of the advanced artificial intelligence system, including any purposes foreseen to be implemented by the applicant. A "use case" shall be defined as a broad category of potential use. 2. After the filing of an application for a license accompanied by payment of the fees for license and investigation, it shall be substantively reviewed. After the application is deemed sufficient and complete, the secretary shall issue the license, or the secretary may refuse to issue the license if the secretary shall find that the ethics, experience, character and general fitness of the applicant or any person associated with the applicant are not such as to command the confidence of the community and to warrant the belief that the business will be conducted honestly, fairly and efficiently within the purposes and intent of this article. 3. If the secretary refuses to issue a license, the secretary shall notify the applicant of the denial, return to the applicant the sum paid as a license fee, but retain the investigation fee to cover the costs of investigating the applicant. 4. Each license issued pursuant to this article shall remain in full force unless it is surrendered by the licensee, revoked or suspended.
4. Annually, the ethics and risk management board of each operator shall submit to the secretary a comprehensive report for each licensed high-risk advanced artificial intelligence system which consists of the following: (a) All possible use cases, whether intended or unintended, whether likely or unlikely. (b) A thorough risk assessment for each use case, considering and evaluating the potential for harm, irrespective of the probability of such risk materializing. This shall include, but not be limited to, the system's potential impact on privacy, security, fairness, economic implications, societal well-being, and safety of persons and the environment. (c) A detailed evaluation of known use cases of the system by users, exploring whether certain applications ought to be constrained or banned due to ethical considerations. This shall include an assessment of the operator's capacity to impose such constraints on use cases. (d) A mitigation plan for each identified risk, including preemptive measures, monitoring processes, and responsive actions. This shall also include a communication strategy to inform users and stakeholders about potential risks and steps taken to mitigate them. (e) A comprehensive review of any incidents or failures of the system in the past year, detailing the circumstances, impacts, measures taken to address the issue, and modifications made to prevent such incidents in the future. (f) Any existing attempts to educate users and, based on the existing use of the system by users, a detailed plan on how the operator intends to inform and instruct users on the safe and ethical use of the system, considering varying levels of digital literacy among users. (g) A disclosure of any conflicts of interest within the ethics board, which could potentially influence the board's decisions and recommendations. This shall include measures to manage and resolve such conflicts. (h) An update on the measures taken by the operator to ensure the system's adherence to existing laws, regulations, and ethical guidelines related to artificial intelligence.
§ 519. Source code modifications, updates, upgrades, and rewrites. 1. Where a licensee intends to modify or upgrade the source code of their high-risk advanced artificial intelligence system, such licensee shall be required to inform the secretary of such modification or upgrade and shall be prohibited from implementing such modification or upgrade in an accessible version of the system without express consent of the secretary in writing. This section shall not apply to source code updates. 2. A licensee shall, in writing to the secretary, set forth the purpose of the modification or upgrade, the new functions added to the system or the functions modified, the reason for the modification or upgrade, and an assessment of new risks or risks that may be more probable as a result of the modification or upgrade. The secretary shall, upon receipt of notice, have thirty business days to provide the licensee with approval of the modification or upgrade. Where approval is not received within thirty business days, absent an extension in writing which shall not exceed thirty additional business days, the modification or upgrade shall be deemed approved. Nothing in this subdivision shall be construed as limiting the ability of the secretary to take any action they are authorized to take in relation to the approved modification or upgrade. Where the secretary rejects the modification or upgrade, the secretary shall set forth in writing the reasons for the rejection and steps that the licensee can take to receive approval. Where the secretary approves the modification or upgrade, the licensee may immediately implement such modification or upgrade in a publicly accessible version. 3. A licensee who rewrites the source code of its system shall comply with the same standards set forth in subdivisions one and two of this section provided however that the secretary shall examine such source code in the same manner as a new application and shall provide a letter of approval or rejection upon completion of such review within one hundred eighty business days of receipt of such notices except where the secretary requires an extension of time, then an extension of no more than one hundred eighty days shall be authorized. Where the secretary rejects the rewrite, such letter of rejection shall state the reasons for the rejection and steps that the licensee can take to correct such rejection, if any. Where the secretary approves the modification or upgrade, the licensee may immediately implement such modification or upgrade in a publicly accessible version. 4. All modifications, upgrades, and rewrites shall be conducted in a pre-production environment, which shall mean any stage prior to the accessible version. 5. For purposes of this section: (a) "Modify" shall mean altering the source code of the system to alter the way by which the system, or any features within the system, makes decisions. (b) "Upgrade" shall mean altering the source code of the system which gives it new features or functions. (c) "Rewrite" shall mean a change in the source code to such a substantial degree that: (i) it effectively results in a new version of the system; or (ii) the change nullifies all or a substantial amount of the initial findings of the secretary in the operator's original application. 
(d) "Update" shall mean a change to the source code that includes minor enhancements, improvements, modifications, error corrections, cosmetic changes, or any other change intended to increase the functionality, compatibility, security or performance of the system. (e) "Accessible version" shall mean a version of the software that is available to the public or for private use or that is presently operating within its designated operational parameters.
§ 526. Investigations and examinations. 1. The secretary shall have the power to make such investigations as the secretary shall deem necessary to determine whether any operator or any other person has violated any of the provisions of this article, or whether any licensee has conducted itself in such manner as would justify the revocation of its license, and to the extent necessary therefor, the secretary may require the attendance of and examine any person under oath, and shall have the power to compel the production of all relevant books, records, accounts, documents, source code, and logs. 2. The secretary shall have the power to make such examinations of the books, records, accounts, documents, source code, and logs used in the business of any licensee as the secretary shall deem necessary to determine whether any such licensee has violated any of the provisions of this article. 3. The expenses incurred in making any examination pursuant to this section shall be assessed against and paid by the licensee so examined, except that traveling and subsistence expenses so incurred shall be charged against and paid by licensees in such proportions as the secretary shall deem just and reasonable, and such proportionate charges shall be added to the assessment of the other expenses incurred upon each examination. Upon written notice by the secretary of the total amount of such assessment, the licensee shall become liable for and shall pay such assessment to the secretary. 4. All reports of examinations and investigations, and all correspondence and memoranda concerning or arising out of such examinations or investigations, including any duly authenticated copy or copies thereof in the possession of any licensee or the department, shall be confidential communications, shall not be subject to subpoena and shall not be made public unless, in the judgment of the secretary, the ends of justice and the public advantage will be subserved by the publication thereof, in which event the secretary may publish or authorize the publication of a copy of any such report or other material referred to in this subdivision, or any part thereof, in such manner as the secretary may deem proper.
Any impact assessment conducted pursuant to this subdivision shall be submitted to the department at least thirty days prior to the implementation of the artificial intelligence that is the subject of such assessment.
Each such entity shall file an annual certification of compliance with this section with the chief information officer.
The attorney general, in consultation with the chief information officer, shall have the power to audit the policies filed by entities under this section.
If an entity also has to file any certification of cybersecurity compliance with the department of financial services, such filings shall be done jointly.
1. Every developer and deployer of a high-risk AI system shall comply with the reporting requirements of this section. 2. Together with each report required to be filed under this section, every developer and deployer shall file with the attorney general a copy of the last completed independent audit required by this article. 3. Developers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A developer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is the first deployer to deploy the high-risk AI system, after initial deployment; (ii) one report annually following the submission of the first report; and (iii) one report within six months of any substantial change to the high-risk AI system. (b) A developer report under this section shall include: (i) a description of the system including: (A) the uses of the high-risk AI system that the developer intends; and (B) any explicitly unintended or disallowed uses of the high-risk AI system; (ii) an overview of how the high-risk AI system was developed; (iii) an overview of the high-risk AI system's training data; and (iv) any other information necessary to allow a deployer to: (A) understand the outputs and monitor the system for compliance with this article; and (B) fulfill its duties under this article.
4. Deployers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A deployer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after initial deployment; (ii) a second report within one year following the completion and filing of the first report; (iii) one report every two years following the completion and filing of the second report; and (iv) one report within six months of any substantial change to the high-risk AI system. (b) A deployer report under this section shall include: (i) a description of the system including: (A) the deployer's actual, intended, or planned uses of the high-risk AI system with respect to consequential decisions; and (B) whether the deployer is using the high-risk AI system for any developer unintended or disallowed uses; and (ii) an impact assessment including: (A) whether the high-risk AI system poses a risk of algorithmic discrimination and the steps taken to address the risk of algorithmic discrimination; (B) if the high-risk AI system is or will be monetized, how it is or is planned to be monetized; and (C) an evaluation of the costs and benefits to consumers and other end users. (c) A deployer that is also a developer and is required to submit reports under subdivision three of this section may submit a single joint report provided it contains the information required in this subdivision.
6. For high-risk AI systems which are already in deployment at the time of the effective date of this article, developers and deployers shall have eighteen months from such effective date to complete and file the first report and associated independent audit required by this article. (a) Each developer of a high-risk AI system shall thereafter file at least one report annually following the submission of the first report under this subdivision. (b) Each deployer of a high-risk AI system shall thereafter file at least one report every two years following the submission of the first report under this subdivision.
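Taken together, subdivisions 3, 4, and 6 define distinct filing cadences for developers, deployers, and already-deployed systems. The sketch below illustrates the schedule arithmetic under crude simplifying assumptions (months as 30 days, years as 365 days); it compresses the deployer's one-year-then-biennial cadence into a single rule and is not a compliance tool.

```python
# Illustrative schedule arithmetic for the report cadences above.
# Months and years are approximated; all names are invented.
from datetime import date, timedelta

def first_report_due(trigger: date, months: int = 6) -> date:
    """First report: six months after development/offering or initial
    deployment, or eighteen months after the effective date for systems
    already in deployment (subdivision 6)."""
    return trigger + timedelta(days=30 * months)

def subsequent_due(previous: date, role: str) -> date:
    """Developers file annually; deployers, after the second report,
    file every two years (simplified here to a single interval)."""
    years = 1 if role == "developer" else 2
    return previous + timedelta(days=365 * years)

print(first_report_due(date(2026, 6, 1)))             # newly offered system
print(first_report_due(date(2026, 6, 1), months=18))  # already-deployed system
print(subsequent_due(date(2027, 1, 1), "deployer"))
```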
No less than annually, any real estate broker or online housing platform that uses virtual agents to assist with searches for available properties for sale or rental properties, and any online housing platform that uses AI tools, shall have a disparate impact analysis conducted and shall submit a summary of the most recent disparate impact analysis to the attorney general's office.
§ 1712. Documentation and compliance. 1. Developers of artificial intelligence technologies shall submit documentation to the attorney general affirming: (a) The identities and qualifications of professional domain experts involved in the AI technology, pursuant to section seventeen hundred eleven of this article; (b) The specific phases of development in which such professional domain experts contributed; and (c) Any known risks, limitations, or ethical concerns disclosed during development. 2. The attorney general or a duly authorized representative of the attorney general shall issue certificates of compliance to developers who have submitted documentation pursuant to subdivision one of this section and are found to be in compliance. Any technology and developers found to be not in compliance may be subject to investigation and penalties pursuant to section seventeen hundred thirteen of this article.
2. Reporting requirement. On or before March first of every year, a covered business shall report to the department regarding the impact of artificial intelligence on its hiring and the nature of its artificial intelligence use in the calendar year ending the preceding December thirty-first. Such report shall include: (a) Employment data, including but not limited to: (i) An estimate of the number of employees displaced, or whose hours have been reduced, due in full or in part to use of artificial intelligence; (ii) An estimate of the number of employees hired, or whose hours have been increased, due in full or in part to use of artificial intelligence; and (iii) An estimate of the number of positions previously filled that the covered business has decided not to fill due in full or in part to use of artificial intelligence; and (b) Information on the nature of artificial intelligence usage, including but not limited to: (i) Descriptions of the objectives of the use of artificial intelligence; (ii) Information regarding any human oversight of artificial intelligence; (iii) Information on the frequency and length of use of artificial intelligence; (iv) Information on any use of artificial intelligence in relation to sensitive personal data, including storage and access protections related to use of artificial intelligence in relation to such personal data; and (v) Measures in place for oversight, risk reduction, or other protections related to use of artificial intelligence.
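The report's contents reduce to a handful of counts and narrative fields with a fixed March-first deadline. A minimal sketch, with field names that paraphrase paragraphs (a) and (b) rather than quote them:

```python
# Hypothetical record for the annual report in subdivision 2;
# field names paraphrase paragraphs (a) and (b).
from dataclasses import dataclass

@dataclass
class AIEmploymentImpactReport:
    reporting_year: int            # calendar year ending the prior December 31
    employees_displaced: int       # (a)(i) estimate, including reduced hours
    employees_hired: int           # (a)(ii) estimate, including increased hours
    positions_left_unfilled: int   # (a)(iii) estimate
    usage_objectives: str          # (b)(i)
    human_oversight: str           # (b)(ii)
    frequency_and_length: str      # (b)(iii)
    sensitive_data_practices: str  # (b)(iv)
    risk_reduction_measures: str   # (b)(v)

    def due_date(self) -> str:
        # Due on or before March 1 of the year following the reporting year.
        return f"{self.reporting_year + 1}-03-01"
```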
6. (a) A developer or deployer that conducts a full pre-deployment evaluation, full impact assessment, or developer annual review of assessments shall: (i) not later than thirty days after completion, submit the evaluation, assessment, or review to the division; (ii) upon request, make the evaluation, assessment, or review available to the legislature; and (iii) not later than thirty days after completion: (A) publish a summary of the evaluation, assessment, or review on the website of the developer or deployer in a manner that is easily accessible to individuals; and (B) submit such summary to the division. (b) A developer or deployer shall retain all evaluations, assessments, and reviews described in this section for a period of not fewer than ten years. (c) A developer or deployer: (i) may redact and segregate any trade secret (as defined in section 1839 of title 18, United States Code) from public disclosure under this subdivision; and (ii) shall redact and segregate personal data from public disclosure under this section.
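Note the asymmetry in paragraph (c): trade-secret redaction is permissive ("may"), while personal-data redaction is mandatory ("shall"). The sketch below encodes that distinction; the tagging scheme and all names are assumptions for illustration.

```python
# Hedged sketch of paragraph (c)'s asymmetric redaction rule before public
# disclosure: personal data must be segregated; trade secrets may be.
def redact_for_publication(sections, redact_trade_secrets: bool = True):
    """Drop personal data always; drop trade secrets only if elected."""
    published = []
    for text, tags in sections:                # tags: set of labels per section
        if "personal_data" in tags:
            continue                           # (c)(ii): shall be segregated
        if redact_trade_secrets and "trade_secret" in tags:
            continue                           # (c)(i): may be segregated
        published.append(text)
    return published

summary = [("Methodology overview", set()),
           ("Model weights provenance", {"trade_secret"}),
           ("Named complainant details", {"personal_data"})]
print(redact_for_publication(summary))  # only the methodology overview remains
```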
1. Every developer and deployer of a high-risk AI system shall comply with the reporting requirements of this section. 2. Together with each report required to be filed under this section, every developer and deployer shall file with the attorney general a copy of the last completed independent audit required by this article. 3. Developers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A developer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; (ii) one report annually following the submission of the first report; and (iii) one report within six months of any substantial change to the high-risk AI system. (b) A developer report under this section shall include: (i) a description of the system including: (A) the uses of the high-risk AI system that the developer intends; and (B) any explicitly unintended or disallowed uses of the high-risk AI system; (ii) an overview of how the high-risk AI system was developed; (iii) an overview of the high-risk AI system's training data; and (iv) any other information necessary to allow a deployer to: (A) understand the outputs and monitor the system for compliance with this article; and (B) fulfill its duties under this article.
4. Deployers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A deployer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after initial deployment; (ii) a second report within one year following the completion and filing of the first report; (iii) one report every two years following the completion and filing of the second report; and (iv) one report within six months of any substantial change to the high-risk AI system. (b) A deployer report under this section shall include: (i) a description of the system including: (A) the deployer's actual, intended, or planned uses of the high-risk AI system with respect to consequential decisions; and (B) whether the deployer is using the high-risk AI system for any developer unintended or disallowed uses; and (ii) an impact assessment including: (A) whether the high-risk AI system poses a risk of algorithmic discrimination and the steps taken to address the risk of algorithmic discrimination; (B) if the high-risk AI system is or will be monetized, how it is or is planned to be monetized; and (C) an evaluation of the costs and benefits to consumers and other end users. (c) A deployer that is also a developer and is required to submit reports under subdivision three of this section may submit a single joint report provided it contains the information required in this subdivision.
6. For high-risk AI systems which are already in deployment at the time of the effective date of this article, developers and deployers shall have eighteen months from such effective date to complete and file the first report and associated independent audit required by this article. (a) Each developer of a high-risk AI system shall thereafter file at least one report annually following the submission of the first report under this subdivision. (b) Each deployer of a high-risk AI system shall thereafter file at least one report every two years following the submission of the first report under this subdivision.
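Read together, the developer, deployer, and transition provisions above define a filing calendar. The sketch below derives a deployer's due dates under an assumed add_months helper; it is a simplification for illustration, not a compliance tool.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Naive month arithmetic; clamps to the 28th to sidestep
    # end-of-month edge cases. Illustrative only.
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

def deployer_due_dates(deployed_on: date, horizon_years: int = 6) -> list[date]:
    # Subdivision 4(a): first report six months after initial
    # deployment, second report one year after the first filing,
    # then one report every two years (filing assumed on the due date).
    due = [add_months(deployed_on, 6)]
    due.append(add_months(due[0], 12))
    while due[-1].year - deployed_on.year < horizon_years:
        due.append(add_months(due[-1], 24))
    return due

def transition_due_date(effective_date: date) -> date:
    # Subdivision 6: systems already in deployment get eighteen
    # months from the effective date for the first report and audit.
    return add_months(effective_date, 18)

# Example: a system first deployed January 15, 2026.
print(deployer_due_dates(date(2026, 1, 15))[:4])
```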
Any impact assessment conducted pursuant to this subdivision shall be submitted to the department at least thirty days prior to the implementation of the artificial intelligence that is the subject of such assessment.
6. Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general and in a form and manner prescribed by the attorney general, the general statement or documentation described in subdivision two of this section. The attorney general may evaluate such general statement or documentation to ensure compliance with the provisions of this section. In disclosing such general statement or documentation to the attorney general pursuant to this subdivision, the developer may designate such general statement or documentation as including any information that is exempt from disclosure pursuant to subdivision five of this section or article six of the public officers law. To the extent such general statement or documentation includes such information, such general statement or documentation shall be exempt from disclosure. To the extent any information contained in such general statement or documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
9. Beginning on January first, two thousand twenty-seven, the attorney general may require that a deployer, or a third party contracted by the deployer pursuant to subdivision three of this section, as applicable, disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general, and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subdivision two of this section, the impact assessment completed pursuant to subdivision three of this section, or the records maintained pursuant to paragraph (e) of subdivision three of this section. The attorney general may evaluate such risk management policy, impact assessment, or records to ensure compliance with the provisions of this section. In disclosing such risk management policy, impact assessment, or records to the attorney general pursuant to this subdivision, the deployer or third-party contractor, as applicable, may designate such risk management policy, impact assessment, or records as including any information that is exempt from disclosure pursuant to subdivision eight of this section or article six of the public officers law. To the extent such risk management policy, impact assessment, or records include such information, such risk management policy, impact assessment, or records shall be exempt from disclosure. To the extent any information contained in such risk management policy, impact assessment, or record is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
4. Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general and in a form and manner prescribed by the attorney general, any documentation maintained pursuant to this section. The attorney general may evaluate such documentation to ensure compliance with the provisions of this section. In disclosing any documentation to the attorney general pursuant to this subdivision, the developer may designate such documentation as including any information that is exempt from disclosure pursuant to subdivision three of this section or article six of the public officers law. To the extent such documentation includes such information, such documentation shall be exempt from disclosure. To the extent any information contained in such documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
2. Reporting requirement. On or before March first of every year, a covered business shall report to the department regarding the impact of artificial intelligence on its hiring and the nature of its artificial intelligence use in the calendar year ending the preceding December thirty-first. Such report shall include: (a) Employment data, including but not limited to: (i) An estimate of the number of employees displaced, or whose hours have been reduced, due in full or in part to use of artificial intelligence; (ii) An estimate of the number of employees hired, or whose hours have been increased, due in full or in part to use of artificial intelligence; and (iii) An estimate of the number of positions previously filled that the covered business has decided not to fill due in full or in part to use of artificial intelligence; and (b) Information on the nature of artificial intelligence usage, including but not limited to: (i) Descriptions of the objectives of the use of artificial intelligence; (ii) Information regarding any human oversight of artificial intelligence; (iii) Information on the frequency and length of use of artificial intelligence; (iv) Information on any use of artificial intelligence in relation to sensitive personal data, including storage and access protections related to use of artificial intelligence in relation to such personal data; and (v) Measures in place for oversight, risk reduction, or other protections related to use of artificial intelligence.
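As a rough illustration of the report's required contents, the following Python dataclass mirrors items (a)(i) through (b)(v); the class and field names are invented for the example and do not appear in the provision.

```python
from dataclasses import dataclass

@dataclass
class AIEmploymentReport:
    # Illustrative schema for the annual report under subdivision 2.
    reporting_year: int
    employees_displaced_or_hours_reduced: int   # (a)(i), estimate
    employees_hired_or_hours_increased: int     # (a)(ii), estimate
    positions_left_unfilled: int                # (a)(iii), estimate
    objectives: str                             # (b)(i)
    human_oversight: str                        # (b)(ii)
    frequency_and_length_of_use: str            # (b)(iii)
    sensitive_data_use_and_protections: str     # (b)(iv)
    oversight_and_risk_measures: str            # (b)(v)

    def due_date(self) -> str:
        # Due on or before March 1 of the year after the
        # calendar year being reported on.
        return f"{self.reporting_year + 1}-03-01"
```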
(f) (i) Each notice required under this section shall include a statement indicating whether the employment losses described are the result, in whole or in part, of the introduction, expansion, or adoption of artificial intelligence (AI) systems, automation technologies, or machine-based processes that have replaced or materially altered the duties of affected employees. (ii) Such statement shall also include, to the extent known by the employer at the time of notice: (A) The estimated percentage of positions affected due to such automation or AI integration; and (B) A brief description of the technology or process that contributed to the reduction.
(B)(1) Each health plan issuer, annually, on or before the first day of March, shall file a report with the superintendent of insurance covering all of the following information: (a) Each provider in the health plan issuer's network; (b) The number of covered persons enrolled in health benefit plans issued by the health plan issuer in this state in the preceding calendar year; (c) Whether the health plan issuer used, is using, or will use artificial intelligence-based algorithms in utilization review processes for those health benefit plans and, if so, all of the following information: (i) The algorithm criteria; (ii) Data sets used to train the algorithm; (iii) The algorithm itself; (iv) Outcomes of the software in which the algorithm is used; (v) Data on the amount of time a human reviewer spends examining an adverse determination prior to signing off on each such determination. (2) The health plan issuer shall submit the report in a form prescribed by the superintendent. An officer of the health plan issuer shall verify the contents of the report.
(D) The superintendent may audit a health plan issuer's use of an artificial intelligence-based algorithm at any time and may contract with a third party for the purposes of conducting such an audit.
(B)(1) Each health plan issuer, annually, on or before the first day of March, shall file a report with the superintendent of insurance covering all of the following information: (a) Each provider in the health plan issuer's network; (b) The number of covered persons enrolled in health benefit plans issued by the health plan issuer in this state in the preceding calendar year; (c) Whether the health plan issuer used, is using, or will use artificial intelligence-based algorithms in utilization review processes for those health benefit plans and, if so, all of the following information: (i) The algorithm criteria; (ii) Data sets used to train the algorithm; (iii) The algorithm itself; (iv) Outcomes of the software in which the algorithm is used; (v) Data on the amount of time a human reviewer spends examining an adverse determination prior to signing off on each such determination. (2) The health plan issuer shall submit the report in a form prescribed by the superintendent. An officer of the health plan issuer shall verify the contents of the report. (3) The superintendent shall publish a copy of the report on the web site of the department of insurance. The health plan issuer shall publish a copy of the report on the health plan issuer's publicly accessible web site.
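The reporting items in division (B) translate naturally into a record structure. The sketch below is one hypothetical way to model them; the class and field names are assumptions, not terms from the provision.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmDisclosure:
    # (B)(1)(c)(i)-(v): items required when AI-based algorithms
    # are, were, or will be used in utilization review.
    criteria: str
    training_data_sets: list[str]
    algorithm_artifact: str      # the algorithm itself, e.g. a file reference
    software_outcomes: str
    human_review_time_data: str  # time spent per adverse determination

@dataclass
class IssuerAnnualReport:
    # (B)(1)(a)-(b) and (2)-(3): filed on or before March 1,
    # verified by an officer, and published by both the
    # superintendent and the issuer.
    network_providers: list[str]
    covered_persons_prior_year: int
    uses_ai_in_utilization_review: bool
    ai_disclosure: AlgorithmDisclosure | None = None
    officer_verification: str = ""
```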
(a) Compliance statement required.--A facility using artificial intelligence-based algorithms for clinical decision making shall annually file with the department in the form and manner prescribed by the department an artificial intelligence compliance statement. (b) Contents.--Each compliance statement must: (1) Summarize the function and scope of artificial intelligence-based algorithms used for clinical decision making. (2) Provide a logic or decision tree of artificial intelligence-based algorithms used for clinical decision making. (3) Provide a description of each training data set used by artificial intelligence-based algorithms for clinical decision making, including the source of the data. (4) Attest that the artificial intelligence-based algorithms and the training data sets comply with section 3503 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the facility for overseeing and validating the performance and compliance of the artificial intelligence-based algorithms in accordance with section 3503.
(a) Compliance statement required.--An insurer using artificial intelligence-based algorithms in the utilization review process shall annually file with the department in the form and manner prescribed by the department an artificial intelligence compliance statement. (b) Contents.--Each compliance statement must: (1) Summarize the function and scope of the artificial intelligence-based algorithms used for utilization review. (2) Provide a logic or decision tree of artificial intelligence-based algorithms used for utilization review. (3) Provide a description of each training data set used by artificial intelligence-based algorithms for utilization review, including the source of the data. (4) Attest that the artificial intelligence-based algorithms and the training data sets comply with section 5203 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the insurer for overseeing and validating the performance and compliance of the artificial intelligence-based algorithms in accordance with section 5203.
(a) Compliance statement required.--An MA or CHIP managed care plan using artificial intelligence-based algorithms in the utilization review process shall annually file with the department, in the form and manner prescribed by the department, an artificial intelligence compliance statement. (b) Contents.--Each compliance statement must: (1) Summarize the function and scope of the artificial intelligence-based algorithms used for utilization review. (2) Provide a logic or decision tree of artificial intelligence-based algorithms used for utilization review. (3) Provide a description of each training data set used by artificial intelligence-based algorithms for utilization review, including the source of the data. (4) Attest that the artificial intelligence-based algorithms and the training data sets comply with section 5303 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the MA or CHIP managed care plan for overseeing and validating the performance and compliance of the artificial intelligence-based algorithms in accordance with section 5303.
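The three compliance-statement provisions above are structurally identical, differing only in the filer and the cross-referenced responsible-use section. A single illustrative schema can therefore cover all three; the names below are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class AIComplianceStatement:
    # Contents required by (b)(1)-(5) in each of the three
    # parallel provisions.
    filer_type: str                        # "facility", "insurer", or "MA/CHIP plan"
    responsible_use_section: str           # e.g. "3503", "5203", or "5303"
    function_and_scope_summary: str        # (b)(1)
    logic_or_decision_tree: str            # (b)(2)
    training_data_descriptions: list[str]  # (b)(3), including data sources
    compliance_attestation_evidence: str   # (b)(4)
    oversight_and_validation_process: str  # (b)(5)
```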
The department may request additional information and evidence from a facility regarding the items provided under sections 3502 (relating to disclosure), 3503 (relating to responsible use) and 3504 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
The department may request additional information and evidence from an insurer regarding the items provided under sections 5202 (relating to disclosure), 5203 (relating to responsible use) and 5204 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
The department may request additional information and evidence from an MA or CHIP managed care plan regarding the items provided under sections 5302 (relating to disclosure), 5303 (relating to responsible use) and 5304 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
(e) Filing.--A supplier shall file the policy described under subsection (a) with the bureau, in the form and manner as prescribed by the bureau, along with: (1) The name and address of the supplier. (2) The name of the chatbot. (3) An annual filing fee as prescribed by the bureau. (f) Additional information.--A supplier may provide to the bureau, in the form and manner prescribed by the bureau: (1) Any revision to the policy described under subsection (a) and filed in accordance with subsection (e). (2) Any other documentation that the supplier deems appropriate to provide. (g) Compliance.--A supplier shall comply with the requirements of the policy filed in accordance with this section.
(a) Compliance statement required.--A facility using artificial-intelligence-based algorithms for clinical decision making shall annually file with the department in the form and manner prescribed by the department an artificial intelligence compliance statement. (b) Contents.--A compliance statement must: (1) Summarize the function and scope of artificial-intelligence-based algorithms used for clinical decision making. (2) Provide a logic or decision tree of artificial-intelligence-based algorithms used for clinical decision making. (3) Provide a description of each training data set used by artificial-intelligence-based algorithms for clinical decision making, including the source of the data. (4) Attest that the artificial-intelligence-based algorithms and the training data sets comply with section 3503 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the facility for overseeing and validating the performance and compliance of the artificial-intelligence-based algorithms in accordance with section 3503.
(a) Compliance statement required.--An insurer using artificial-intelligence-based algorithms in the utilization review process shall annually file with the department in the form and manner prescribed by the department an artificial intelligence compliance statement. (b) Contents.--A compliance statement must: (1) Summarize the function and scope of the artificial-intelligence-based algorithms used for utilization review. (2) Provide a logic or decision tree of artificial-intelligence-based algorithms used for utilization review. (3) Provide a description of each training data set used by artificial-intelligence-based algorithms for utilization review, including the source of the data. (4) Attest that the artificial-intelligence-based algorithms and the training data sets comply with section 5203 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the insurer for overseeing and validating the performance and compliance of the artificial-intelligence-based algorithms in accordance with section 5203.
(a) Compliance statement required.--An MA or CHIP managed care plan using artificial-intelligence-based algorithms in the utilization review process shall annually file with the department, in the form and manner prescribed by the department, an artificial intelligence compliance statement. (b) Contents.--A compliance statement must: (1) Summarize the function and scope of the artificial-intelligence-based algorithms used for utilization review. (2) Provide a logic or decision tree of artificial-intelligence-based algorithms used for utilization review. (3) Provide a description of each training data set used by artificial-intelligence-based algorithms for utilization review, including the source of the data. (4) Attest that the artificial-intelligence-based algorithms and the training data sets comply with section 5303 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the MA or CHIP managed care plan for overseeing and validating the performance and compliance of the artificial-intelligence-based algorithms in accordance with section 5303.
Insurers subject to this chapter shall disclose to the office of the health insurance commissioner ("OHIC") and the department of business regulation ("DBR") how they use artificial intelligence to manage healthcare claims and coverage including, but not limited to, the types of artificial intelligence models used, the role of artificial intelligence in the decision-making process, training datasets, performance metrics, governance and risk management policies, and the decisions on healthcare claims and coverage where artificial intelligence made, or was a substantial factor in making, the decisions. Insurers shall submit to the office of the health insurance commissioner and the department of business regulation, upon request, all information, including documents and software, that permits enforcement of this chapter.
(G) The Attorney General may require that a developer disclose to the Attorney General, no later than ninety days after the request and in a form and manner prescribed by the Attorney General, the statement or documentation described in subsection (B). The Attorney General may evaluate such statement or documentation to ensure compliance with this chapter, and the statement or documentation is not subject to disclosure under the South Carolina Freedom of Information Act. In a disclosure made pursuant to this subsection, a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
(I) The Attorney General may require that a deployer, or a third party contracted by the deployer, disclose to him, no later than ninety days after the request and in a form and manner prescribed by him, the risk management policy implemented pursuant to subsection (B), the impact assessment completed pursuant to subsection (C), or the records maintained pursuant to subsection (C)(6). The Attorney General may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and records are not subject to disclosure under the South Carolina Freedom of Information Act. In a disclosure made pursuant to this subsection, a deployer may designate the risk management policy, impact assessment, or records as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Any health carrier that makes determinations or provides advice about third-party payment for any health care services using an artificial intelligence, algorithm, or other software tool for the purpose of utilization review or that contracts with or otherwise works through an entity that uses an artificial intelligence, algorithm, or other software tool for the purpose of utilization review shall compile an annual report detailing how, during the preceding fiscal year, the artificial intelligence, algorithm, or other software tool was used in the utilization review process and the nature and degree of human review and oversight that was used to affirm or negate any determinations. The report must be forwarded to the Executive Board of the Legislative Research Council on or before December first of each year.
The Division of Insurance may, at any time, inspect a health carrier's automated system to ensure that the health carrier's use of artificial intelligence, algorithms, or other software tools is in compliance with sections 1 and 2 of this Act. If the division determines that the automated system is not in compliance, the division shall notify the attorney general who may direct the health carrier to cease and desist from engaging in further noncompliant activities.
Each carrier shall (i) publicly disclose, if applicable, to the Bureau the carrier's use of AI to manage insurance claims and coverage, including in underlying algorithms, data used, and resulting determinations; (ii) submit to the Bureau, upon request, all information, including documents and software, necessary for enforcement of this subdivision;
(a) Every developer and deployer of an automated decision system used in a consequential decision shall comply with the reporting requirements of this section. Regardless of final findings, reports shall be filed with the Attorney General prior to deployment of an automated decision system used in a consequential decision and then annually, or after each substantial change to the system, whichever comes first. (b) Together with each report required to be filed under this section, developers and deployers shall file with the Attorney General a copy of the last completed independent audit required by this subchapter and a legal attestation that the automated decision system used in a consequential decision: (1) does not violate any provision of this subchapter; or (2) may violate or does violate one or more provisions of this subchapter, that there is a plan of remediation to bring the automated decision system into compliance with this subchapter, and a summary of the plan of remediation.
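Subsection (b) admits exactly two forms of attestation. The following minimal sketch encodes that branch, including the rule that a non-compliant attestation must carry a remediation plan summary; the class name and fields are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class LegalAttestation:
    # Subsection (b): every filing carries the latest independent
    # audit plus an attestation in one of two forms.
    compliant: bool
    remediation_plan_summary: str | None = None

    def validate(self) -> None:
        # If the filer attests that the system may or does violate
        # the subchapter, a remediation plan summary is mandatory.
        if not self.compliant and not self.remediation_plan_summary:
            raise ValueError(
                "non-compliant attestation requires a remediation plan summary")
```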
(c) Developers of automated decision systems shall file with the Attorney General a report containing the following: (1) a description of the system including: (A) a description of the system's software stack; (B) the purpose of the system and its expected benefits; and (C) the system's current and intended uses, including what consequential decisions it will support and what stakeholders will be impacted; (2) the intended outputs of the system and whether the outputs can be or are otherwise appropriate to be used for any purpose not previously articulated; (3) the methods for training their models, including: (A) any pre-processing steps taken to prepare datasets for the training of a model underlying an automated decision system; (B) descriptions of the datasets upon which models were trained and evaluated, how and why datasets were collected and the sources of those datasets, and how that training data will be used and maintained; (C) the quality and appropriateness of the data used in the automated decision system's design, development, testing, and operation; (D) whether the data contains sufficient breadth to address the range of real-world inputs the automated decision system might encounter and how any data gaps have been addressed; and (E) steps taken to ensure compliance with privacy, data privacy, data security, and copyright laws; (4) use and data management policies; (5) any other information necessary to allow the deployer to understand the outputs and monitor the system for compliance with this subchapter; (6) any other information necessary to allow the deployer to comply with the requirements of subsection (d) of this section; (7) a description of the system's capabilities and any developer-imposed limitations, including capabilities outside of its intended use, when the system should not be used, any safeguards or guardrails in place to protect against unintended, inappropriate, or disallowed uses, and testing of any safeguards or guardrails; (8) an internal risk assessment including documentation and results of testing conducted to identify all reasonably foreseeable risks related to algorithmic discrimination, validity and reliability, privacy and autonomy, and safety and security, as well as actions taken to address those risks, and subsequent testing to assess the efficacy of actions taken to address risks; and (9) whether the system should be monitored and, if so, how the system should be monitored.
(d) Deployers of automated decision systems used in consequential decisions shall file with the Attorney General a report containing the following: (1) a description of the system, including: (A) a description of the system's software stack; (B) the purpose of the system and its expected benefits; and (C) the system's current and intended uses, including what consequential decisions it will support and what stakeholders will be impacted; (2) the intended outputs of the system and whether the outputs can be or are otherwise appropriate to be used for any purpose not previously articulated; (3) whether the deployer collects revenue or plans to collect revenue from use of the automated decision system in a consequential decision and, if so, how it monetizes or plans to monetize use of the system; (4) whether the system is designed to make consequential decisions itself or whether and how it supports consequential decisions; (5) a description of the system's capabilities and any deployer-imposed limitations, including capabilities outside of its intended use, when the system should not be used, any safeguards or guardrails in place to protect against unintended, inappropriate, or disallowed uses, and testing of any safeguards or guardrails; (6) an assessment of the relative benefits and costs to the consumer given the system's purpose, capabilities, and probable use cases; (7) an internal risk assessment including documentation and results of testing conducted to identify all reasonably foreseeable risks related to algorithmic discrimination, accuracy and reliability, privacy and autonomy, and safety and security, as well as actions taken to address those risks, and subsequent testing to assess the efficacy of actions taken to address risks; and (8) whether the system should be monitored and, if so, how the system should be monitored.
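Subdivisions (c)(8) and (d)(7) each enumerate four risk families that the internal risk assessment must test for (they differ in one: validity versus accuracy). A small coverage check, sketched below with assumed names, makes the requirement concrete.

```python
# (c)(8): developer risk families; (d)(7): deployer risk families.
DEVELOPER_RISKS = {"algorithmic discrimination", "validity and reliability",
                   "privacy and autonomy", "safety and security"}
DEPLOYER_RISKS = {"algorithmic discrimination", "accuracy and reliability",
                  "privacy and autonomy", "safety and security"}

def missing_risk_coverage(documented: set[str], required: set[str]) -> set[str]:
    # Returns the required risk categories for which the internal
    # risk assessment has not yet documented testing.
    return required - documented

# Example: a deployer that has only tested two of the four families.
print(missing_risk_coverage({"algorithmic discrimination",
                             "safety and security"}, DEPLOYER_RISKS))
```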
(f) For automated decision systems already in deployment for use in consequential decisions on or before July 1, 2025, developers and deployers shall, not later than 18 months after July 1, 2025, complete and file the reports and complete the independent audit required by this subchapter.
(c) The Attorney General may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subsection (a) of this section in a form and manner prescribed by the Attorney General. The Attorney General may evaluate the risk management policy and program to ensure compliance with this section.
(a) Each deployer of an inherently dangerous artificial intelligence system shall: (1) submit to the Division of Artificial Intelligence an Artificial Intelligence System Safety and Impact Assessment prior to deploying the inherently dangerous artificial intelligence system in this State, and every two years thereafter; and (2) submit to the Division of Artificial Intelligence an updated Artificial Intelligence System Safety and Impact Assessment if the deployer makes a material and substantial change to the inherently dangerous artificial intelligence system that includes: (A) the purpose for which the system is used; or (B) the type of data the system processes or uses for training purposes. (b) Each Artificial Intelligence System Safety and Impact Assessment pursuant to subsection (a) of this section shall include, with respect to the inherently dangerous artificial intelligence system: (1) the purpose of the system; (2) the deployment context and intended use cases; (3) the benefits of use; (4) any foreseeable risk of unintended or unauthorized uses and the steps taken, to the extent reasonable, to mitigate the risk; (5) whether the model is proprietary; (6) a description of the data the system processes or uses for training purposes; (7) whether the data the system uses for training purposes has been processed to remove personal information, copyrighted information, and do-not-train data; (8) a description of transparency measures, including identifying to individuals when the system is in use; (9) identification of any third-party artificial intelligence systems or datasets the deployer relies on to train or operate the system, if applicable; (10) whether the developer of the system, if different than the deployer, disclosed the information pursuant to this subsection as well as the results of testing, vulnerabilities, and the parameters for safe and intended use; (11) a description of the data that the system, once deployed, processes as inputs; (12) a description of postdeployment monitoring and user safeguards, including a description of the oversight process in place to address issues as they arise; and (13) a description of how the model impacts consequential decisions or the collection of biometric data.
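The assessment cadence in subsection (a) (biennial filings plus an immediate update on a material and substantial change) can be captured in a few lines. The helper below is illustrative; the 365-day year is a simplification and the function name is an assumption.

```python
from datetime import date, timedelta

def next_assessment_due(last_filed: date, as_of: date,
                        material_change: bool = False) -> date:
    # (a)(1): a new assessment is due every two years;
    # (a)(2): a material and substantial change to the system's
    # purpose or its training/processing data triggers an
    # immediate updated filing.
    if material_change:
        return as_of  # an updated assessment is owed now
    return last_filed + timedelta(days=365 * 2)
```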
(c)(1) Whenever the Attorney General has reasonable cause to believe that any person has engaged in or is engaging in any violation of this subchapter, the Attorney General may issue a civil investigative demand. (2) In rendering and furnishing any information requested pursuant to a civil investigative demand, a developer or deployer may redact or omit any trade secrets or information protected from disclosure by State or federal law. If a developer or deployer refuses to disclose or redacts or omits information based on the exemption from disclosure of trade secrets, the developer or deployer shall affirmatively state to the Attorney General that the basis for nondisclosure, redaction, or omission is that the information is a trade secret. (3) To the extent that any information requested pursuant to a civil investigative demand is subject to attorney-client privilege or work-product protection, disclosure of the information shall not constitute a waiver of the privilege or protection. (4) Any information, statement, or documentation provided to the Attorney General pursuant to this subsection shall be exempt from public inspection and copying under the Public Records Act.
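Paragraph (c)(2) pairs the right to redact trade secrets with a duty to affirmatively state the basis for the redaction. One hypothetical way to keep that record is sketched below; all names are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class CIDResponse:
    # Sketch of a response to a civil investigative demand under
    # subsection (c).
    produced_documents: list[str] = field(default_factory=list)
    trade_secret_statements: list[str] = field(default_factory=list)

    def redact_trade_secret(self, doc_id: str) -> None:
        # (c)(2): a redaction or omission on trade-secret grounds
        # must be affirmatively identified as such to the Attorney
        # General; recording the basis preserves that statement.
        self.trade_secret_statements.append(
            f"{doc_id}: redacted/omitted on the stated basis that "
            "the information is a trade secret")
```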
(c) To file a policy with the Office of the Attorney General under this section, a supplier of a mental health chatbot: (1) shall provide to the Office, in the form and manner prescribed by the Office: (A) the name and address of the supplier; (B) the name of the mental health chatbot supplied by the supplier; (C) the written policy described in subsection (b) of this section; and (D) a $100.00 filing fee; and (2) may provide to the Office: (A) any revisions to a policy filed under this section; and (B) any other documentation that the supplier elects to provide.
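The filing package in subdivision (c)(1) is a fixed set of items plus a $100.00 fee, which lends itself to a simple completeness check. The sketch below uses invented field names and is illustrative only.

```python
REQUIRED_FILING_FIELDS = {
    # (c)(1)(A)-(D): items a supplier must provide to file a policy.
    "supplier_name", "supplier_address", "chatbot_name",
    "written_policy", "filing_fee_usd",
}

def validate_filing(filing: dict) -> list[str]:
    # Returns any missing required items and checks the fee amount.
    problems = [f for f in REQUIRED_FILING_FIELDS if f not in filing]
    if filing.get("filing_fee_usd") != 100.00:
        problems.append("filing_fee_usd must be 100.00")
    return problems
```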
A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system.
(a) Not later than December 31 of each year, a medical facility, research facility, company, or nonprofit organization subject to §16-5EE-1 et seq. shall certify to the attorney general that the facility, company, or organization is in compliance with this chapter. (b) An attorney representing a medical facility, research facility, company, or nonprofit organization subject to this chapter shall submit the certification required under §16-5EE-8(a).