The New York AI Act imposes comprehensive obligations on developers and deployers of high-risk AI systems used to make consequential decisions affecting employment, education, housing, healthcare, financial services, law enforcement, and legal services. Core requirements include a duty of reasonable care to prevent algorithmic discrimination, mandatory independent third-party audits, periodic reporting to the Attorney General, and implementation of a risk management policy and program aligned with the NIST AI Risk Management Framework (AI RMF). Deployers must give end users advance notice, opt-out rights, and a post-decision appeal with meaningful human review. The bill prohibits social-scoring AI systems and includes whistleblower protections. Enforcement runs through the Attorney General (injunctions, civil penalties of up to $20,000 per violation, and restitution) and through a private right of action with compensatory damages, attorneys' fees, and a rebuttable presumption favoring plaintiffs at the motion-to-dismiss stage. The audit requirements take effect two years after enactment; all other provisions take effect one year after enactment.