Government agencies that develop, procure, or deploy AI systems must maintain inventories of those systems, conduct impact assessments before deploying AI in consequential public-facing roles, meet defined procurement standards, and disclose AI use to affected individuals. Vendors selling AI to government agencies must be able to demonstrate standards compliance.
(d) (1) A state agency shall require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment conducted pursuant to this section. (2) Notwithstanding any other law, an impact assessment provided to a state agency pursuant to this subdivision shall be kept confidential.
(a) A state agency shall not award a contract for a high-risk automated decision system to a person who has violated any of the following: (1) The Unruh Civil Rights Act (Section 51 of the Civil Code). (2) The California Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code). (3) Chapter 24.6 (commencing with Section 22756) of Division 8 of the Business and Professions Code. (b) As used in this section, "high-risk automated decision system" has the same meaning as defined in Section 22756 of the Business and Professions Code.
(b) (1) No state agency, or any entity acting on behalf of a state agency, shall, directly or indirectly, utilize or apply any artificial intelligence technology in performing any function that (A) is related to the delivery of any public assistance benefit to individuals in the state by such agency, or (B) will have a material impact on the rights, civil liberties, safety or welfare of individuals in the state, unless such utilization or application is specifically authorized by law. (2) No state agency shall authorize any procurement, purchase or acquisition of any artificial intelligence technology, except where the use of such system is specifically authorized by law. (3) If a state agency is authorized to procure, purchase or acquire an artificial intelligence technology, the state agency shall contract with an independent auditor to complete a bias audit pursuant to subsection (a) of section 8 of this act. (c) Any bias audit completed pursuant to subdivision (3) of subsection (b) of this section shall be submitted to the Commissioner of Administrative Services, in a form and manner prescribed by the commissioner, and posted on the agency's Internet web site not later than sixty days prior to deployment of such artificial intelligence technology. Any agency may redact any data in such bias audit to remove personally identifiable information of any individual.
(b) (1) Not later than December 31, 2023, and annually thereafter, the Department of Administrative Services shall conduct an inventory of all systems that employ artificial intelligence and are in use by any state agency. Each such inventory shall include at least the following information for each such system: (A) The name of such system and the vendor, if any, that provided such system; (B) A description of the general capabilities and uses of such system; (C) Whether such system was used to independently make, inform or materially support a conclusion, decision or judgment; and (D) Whether such system underwent an impact assessment prior to implementation. (2) The Department of Administrative Services shall make each inventory conducted pursuant to subdivision (1) of this subsection publicly available on the state's open data portal.
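The inventory provision above enumerates four required fields per system, items (A) through (D). As an editorial illustration only, those fields can be modeled as a simple record; the class and field names below are shorthand invented for this sketch, not statutory terms.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class AISystemInventoryEntry:
    """Illustrative record for one entry in an agency AI inventory."""
    system_name: str          # (A) name of the system
    vendor: Optional[str]     # (A) vendor that provided it, if any
    description: str          # (B) general capabilities and uses
    informs_decisions: bool   # (C) independently makes, informs, or
                              #     materially supports a decision/judgment
    impact_assessed: bool     # (D) impact assessment done pre-implementation

# Hypothetical entry, for illustration only.
entry = AISystemInventoryEntry(
    system_name="Benefits Eligibility Screener",
    vendor="ExampleVendor Inc.",
    description="Screens public-assistance applications for completeness",
    informs_decisions=True,
    impact_assessed=True,
)
print(asdict(entry))
```

Publishing such records on an open data portal, as subdivision (2) requires, would then be a matter of serializing these entries to a common format such as CSV or JSON.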
(b) (1) Not later than February 1, 2024, the Office of Policy and Management shall develop and establish policies and procedures concerning the development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence and are in use by state agencies. Such policies and procedures shall, at a minimum, include policies and procedures that: (A) Govern the procurement, implementation and ongoing assessment of such systems by state agencies; (B) Are sufficient to ensure that no such system (i) results in any unlawful discrimination against any individual or group of individuals, or (ii) has any unlawful disparate impact on any individual or group of individuals on the basis of any actual or perceived differentiating characteristic, including, but not limited to, age, genetic information, color, ethnicity, race, creed, religion, national origin, ancestry, sex, gender identity or expression, sexual orientation, marital status, familial status, pregnancy, veteran status, disability or lawful source of income; (C) Require a state agency to assess the likely impact of any such system before implementing such system; and (D) Provide for the Department of Administrative Services to perform ongoing assessments of such systems to ensure that no such system results in any unlawful discrimination or disparate impact described in subparagraph (B) of this subdivision. (2) The Office of Policy and Management may revise the policies and procedures established pursuant to subdivision (1) of this subsection if the Secretary of the Office of Policy and Management determines, in said secretary's discretion, that such revision is necessary. (3) The Office of Policy and Management shall post the policies and procedures established pursuant to subdivision (1) of this subsection, and any revision made to such policies and procedures pursuant to subdivision (2) of this subsection, on the office's Internet web site.
(c) Beginning on February 1, 2024, no state agency shall implement any system that employs artificial intelligence (1) unless the state agency has performed an impact assessment, in accordance with the policies and procedures established pursuant to subsection (b) of this section, to ensure that such system will not result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (b) of this section, or (2) if the head of such state agency determines, in such agency head's discretion, that such system will result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (b) of this section.
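Subsection (c) is a two-part gate on implementation. A minimal sketch of its logic, with editorial (non-statutory) parameter names:

```python
def may_implement(assessment_performed: bool,
                  agency_head_finds_discrimination: bool) -> bool:
    """Sketch of the subsection (c) gate: a state agency may implement
    an AI system only if (1) an impact assessment was performed under the
    subsection (b) policies AND (2) the agency head has not determined the
    system will result in unlawful discrimination or disparate impact."""
    return assessment_performed and not agency_head_finds_discrimination

print(may_implement(True, False))   # both conditions satisfied
print(may_implement(True, True))    # barred by clause (2)
print(may_implement(False, False))  # barred by clause (1)
```

The parallel Judicial Department provision in subsection (d) below applies the same two-part test, with the Chief Court Administrator in place of the agency head.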
(b) (1) Not later than December 31, 2023, and annually thereafter, the Judicial Department shall conduct an inventory of the department's systems that employ artificial intelligence. Each such inventory shall include at least the following information for each such system: (A) The name of such system and the vendor, if any, that provided such system; (B) A description of the general capabilities and uses of such system; (C) Whether such system was used to independently make, inform or materially support a conclusion, decision or judgment; and (D) Whether such system underwent an impact assessment prior to implementation. (2) The Judicial Department shall make each inventory conducted pursuant to subdivision (1) of this subsection publicly available on the department's Internet web site.
(c) (1) Not later than February 1, 2024, the Judicial Department shall develop and establish policies and procedures concerning the department's development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence. Such policies and procedures shall, at a minimum, include policies and procedures that: (A) Govern the department's procurement, implementation and ongoing assessment of such systems; (B) Are sufficient to ensure that no such system (i) results in any unlawful discrimination against any individual or group of individuals, or (ii) has any unlawful disparate impact on any individual or group of individuals on the basis of any actual or perceived differentiating characteristic, including, but not limited to, age, genetic information, color, ethnicity, race, creed, religion, national origin, ancestry, sex, gender identity or expression, sexual orientation, marital status, familial status, pregnancy, veteran status, disability or lawful source of income; (C) Require the department to assess the likely impact of any such system before implementing such system; and (D) Provide for ongoing assessments of such systems to ensure that no such system results in any unlawful discrimination or disparate impact described in subparagraph (B) of this subdivision. (2) The Judicial Department may revise the policies and procedures established pursuant to subdivision (1) of this subsection if the Chief Court Administrator determines, in said administrator's discretion, that such revision is necessary. (3) The Judicial Department shall post the policies and procedures established pursuant to subdivision (1) of this subsection, and any revision made to such policies and procedures pursuant to subdivision (2) of this subsection, on the department's Internet web site.
(d) Beginning on February 1, 2024, the Judicial Department shall: (1) Not implement any system that employs artificial intelligence (A) unless the department has performed an impact assessment, in accordance with the policies and procedures established pursuant to subsection (c) of this section, to ensure that such system will not result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (c) of this section, or (B) if the Chief Court Administrator determines, in said administrator's discretion, that such system will result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (c) of this section; and (2) Perform ongoing assessments of the department's systems that employ artificial intelligence to ensure that no such system shall result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (c) of this section.
(b) Beginning July 1, 2026, a governmental entity may not accept a bid on, a proposal for, or a reply to, or enter into a contract with, an entity to provide artificial intelligence technology, software, or products, including as a portion or an option to the products or services provided under the contract, unless the entity provides the governmental entity with an affidavit signed by an officer or a representative of the entity under penalty of perjury attesting that the entity does not meet any of the criteria in paragraph (7)(a), paragraph (7)(b), or paragraph (7)(c). (7) A governmental entity may not knowingly enter into a contract with an entity for artificial intelligence technology, software, or products, including as a portion or an option to the products or services provided under the contract, if: (a) The entity is owned by the government of a foreign country of concern; (b) A government of a foreign country of concern has a controlling interest in the entity; or (c) The entity is organized under the laws of or has its principal place of business in a foreign country of concern.
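The affidavit in paragraph (b) attests that none of the three paragraph (7) criteria apply; a contract is barred if any one of them holds. A minimal sketch of that disjunctive test, with editorial parameter names:

```python
def contract_permitted(owned_by_foreign_government_of_concern: bool,
                       foreign_government_controlling_interest: bool,
                       organized_or_principal_place_in_country_of_concern: bool) -> bool:
    """Mirrors paragraph (7): the contract is barred if ANY criterion
    (7)(a), (7)(b), or (7)(c) is met; the affidavit attests none are."""
    return not (owned_by_foreign_government_of_concern
                or foreign_government_controlling_interest
                or organized_or_principal_place_in_country_of_concern)

print(contract_permitted(False, False, False))  # eligible to contract
print(contract_permitted(False, True, False))   # barred under (7)(b)
```

Note that eligibility requires all three criteria to be false; a truthful affidavit under paragraph (b) is the vendor's attestation of exactly that.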
Any agency or department of the commonwealth, or any entity acting on behalf of an agency or department, shall be prohibited from, directly or indirectly, utilizing or applying any automated decision system in performing any function that: (i) is related to the delivery of any public assistance benefit; (ii) will have a material impact on the rights, civil liberties, safety, or welfare of any individual within the commonwealth; or (iii) affects any statutorily or constitutionally provided right of an individual; unless such utilization or application is specifically authorized in law.
(a) No executive office, department, division, agency, or commission of the commonwealth shall authorize any procurement, purchase, or acquisition of any service or system utilizing, or relying on, automated decision systems, except where the use of such system is specifically authorized in law.
(b) No state agency shall utilize or apply any automated decision system unless the agency, or an entity acting on behalf of such state agency, shall have conducted an impact assessment for the application and use of such automated decision system. Following the first impact assessment, an impact assessment shall be conducted at least once every two years. An impact assessment shall be conducted prior to any material change to the automated decision-making system that may change the outcome or effect of such system. Such impact assessments shall include: i) a description of the objectives of the automated decision system; ii) an evaluation of the ability of the automated decision system to achieve its stated objectives; iii) a description and evaluation of the objectives and development of the automated decision system including: 1) A summary of the underlying algorithms, computational models, and artificial intelligence tools that are used within the automated decision system; and 2) The design and training data used to develop the automated decision-making process.
iv) testing for: 1) Accuracy, fairness, bias, and discrimination, and an assessment of whether the use of the automated decision-making system produces discriminatory results on the basis of a consumer's or a class of consumers' actual or perceived race, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, source of income, or disability and outlines mitigations for any identified performance differences in outcomes across relevant groups impacted by such use; 2) Any cybersecurity vulnerabilities and privacy risks resulting from the deployment and use of the automated decision-making system, and the development or existence of safeguards to mitigate the risks; 3) Any public health or safety risks resulting from the deployment and use of the automated decision-making system; 4) Any reasonably foreseeable misuse of the automated decision-making system and the development or existence of safeguards against such misuse; v) the extent to which the deployment and use of the automated decision-making system requires the input of sensitive and personal data, how that data is used and stored, and any control users may have over their data; and vi) the notification mechanism or procedure, if any, by which individuals impacted by the utilization of the automated decision-making system may be notified of the use of such automated decision-making system and of the individual's personal data, and informed of their rights and options relating to such use. (c) Notwithstanding the provisions of this section or any other law, if an impact assessment finds that the automated decision-making system produces discriminatory or biased outcomes, the state agency shall cease any utilization, application, or function of such automated decision-making system, and of any information produced using that system.
(d) Any impact assessment conducted pursuant to this section shall be submitted to the governor, the president of the senate, and the speaker of the house at least 60 days prior to the implementation of the automated decision-making system that is the subject of such assessment. The impact assessment of an automated decision-making system that is approved and utilized shall be published on the website of the relevant agency. If the state agency makes a determination that the disclosure of any information required in the impact assessment would result in a substantial negative impact on the health or safety of the public, infringe upon the privacy rights of individuals, or significantly impact the state agency's ability to protect its information technology, it may redact such information, provided that an explanatory statement on the process by which the state agency made such determination is published along with the redacted impact assessment.
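The impact-assessment provisions above enumerate six required components, items i) through vi), and impose a cease-use rule when an assessment finds discriminatory or biased outcomes. As an editorial sketch only, with invented (non-statutory) names:

```python
# Editorial shorthand for the six required assessment components i)-vi).
REQUIRED_COMPONENTS = [
    "objectives",              # i)   objectives of the system
    "objective_evaluation",    # ii)  ability to achieve stated objectives
    "design_and_development",  # iii) algorithms, models, training data
    "testing",                 # iv)  bias, security/privacy, safety, misuse
    "data_handling",           # v)   sensitive data use, storage, user control
    "notification_mechanism",  # vi)  notice to impacted individuals
]

def is_complete(assessment: dict) -> bool:
    """True if every required component is present in the assessment."""
    return all(key in assessment for key in REQUIRED_COMPONENTS)

def must_cease_use(assessment: dict) -> bool:
    """Subsection (c): discriminatory or biased outcomes require the agency
    to stop using the system and any information it produced."""
    return bool(assessment.get("discriminatory_outcomes"))
```

Under this sketch, an agency's compliance workflow would check `is_complete` before the 60-day pre-implementation submission required by subsection (d), and apply `must_cease_use` to the testing results.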
C. Any use of covered artificial intelligence in a criminal investigation by a law-enforcement officer shall be disclosed in the official police report filed for such investigation. Upon arrest or issuance of a summons following a criminal investigation, the official police report shall be submitted to the attorney for the Commonwealth and provided to counsel for the individual under investigation or directly to the individual under investigation if not represented by counsel. Any use of covered artificial intelligence by the law-enforcement agency in a criminal investigation subsequent to arrest shall be disclosed to the attorney for the Commonwealth and the individual under investigation as soon as practicable but no later than 30 calendar days following such use. Disclosure of the use of covered artificial intelligence in the official police report shall include: 1. The name and a description of the covered artificial intelligence; and 2. A brief description of the covered artificial intelligence's role in the investigation, including whether it was used to generate an investigative lead or identify or aid in the identification of a suspect, witness, or victim.