The New York AI Act imposes obligations on developers and deployers of high-risk AI systems used to make consequential decisions in employment, housing, credit, healthcare, education, law enforcement, legal services, and financial services. Core obligations include a duty of reasonable care to prevent algorithmic discrimination, recurring independent third-party audits, periodic reporting to the Attorney General, and a documented risk management program aligned with the NIST AI Risk Management Framework (AI RMF).

End users must receive advance notice before an AI-driven consequential decision, a right to opt out in favor of a human decision-maker, and a post-decision appeal with meaningful human review. The bill categorically prohibits social scoring AI systems.

Enforcement runs through two channels: Attorney General actions (injunctive relief and civil penalties of up to $20,000 per violation) and a private right of action with a plaintiff-friendly presumption at the motion-to-dismiss stage. The audit requirements take effect two years after enactment; all other provisions take effect one year after enactment.