Washington SHB 2157 regulates high-risk AI systems that autonomously make, or serve as a substantial factor in making, consequential decisions affecting consumers in areas such as employment, housing, credit, healthcare, education, and insurance. Developers must provide deployers with documentation covering intended uses, known discrimination risks, performance evaluations, and mitigation measures, and must ensure that synthetic content outputs are identifiable as such. Deployers must implement a risk management program, complete impact assessments before deployment, disclose the use of AI and system details to consumers before interaction, and provide explanations for adverse decisions. Extensive exemptions apply, including for financial institutions subject to ECOA/FCRA, insurers regulated by the state insurance commissioner, HIPAA-covered entities, federally approved AI systems, and chatbots with acceptable use policies prohibiting discriminatory content. Enforcement is exclusively through a private right of action with injunctive relief and attorneys' fees, with an affirmative defense for violations cured within 45 days. Compliance with the NIST AI RMF or ISO/IEC 42001 serves as a safe harbor.