Ins. § 15-10B-05.1(c)(9), (f)
Plain Language
Covered entities must review, and revise if necessary, the performance, use, and outcomes of their AI utilization review tools at least quarterly to maximize accuracy and reliability. The bill adds a new requirement: each quarterly review must include a human evaluation of the real-world health outcomes of decisions made by the AI tool, and the findings of that evaluation must be used to improve the tool so that its decisions are safer, more accurate, and more responsive to patient needs. The result is a continuous feedback loop in which human clinicians assess actual patient outcomes and those assessments drive concrete improvements to the AI system.
Statutory Text
(9) the performance, use, and outcomes of an artificial intelligence, algorithm, or other software tool are reviewed and revised, if necessary and at least on a quarterly basis, to maximize accuracy and reliability, IN ACCORDANCE WITH SUBSECTION (F) OF THIS SECTION;

(F) A REVIEW OF THE PERFORMANCE, USE, AND OUTCOMES OF ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOLS UNDER SUBSECTION (C)(9) OF THIS SECTION SHALL INCLUDE:

(1) A HUMAN EVALUATION OF THE REAL–WORLD HEALTH OUTCOMES OF DECISIONS MADE BY THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL; AND

(2) USE OF THE FINDINGS MADE BY THE EVALUATION REQUIRED UNDER ITEM (1) OF THIS SUBSECTION TO IMPROVE THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL AND MAKE THE DECISIONS OF THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL SAFER, MORE ACCURATE, AND MORE RESPONSIVE TO PATIENT NEEDS.