Establishes Chapter 93M of the Massachusetts General Laws, imposing accountability and transparency obligations on developers and deployers of AI systems, with heightened requirements for high-risk AI systems that materially influence consequential decisions in employment, housing, healthcare, lending, insurance, education, and government services. Developers must exercise reasonable care to mitigate algorithmic discrimination, provide deployers with documentation on intended uses, risks, and training data, notify the Attorney General and deployers of discrimination risks within 90 days, and publish a public summary on their websites. Deployers of high-risk AI must maintain NIST-aligned risk management programs, conduct annual impact assessments, notify consumers when AI influences a consequential decision, provide explanations and appeal processes, and publicly disclose their high-risk AI systems. Corporations using AI to target consumers or influence their behavior must make additional disclosures. The chapter is enforced exclusively by the Attorney General under Chapter 93A; there is no private right of action. Exemptions exist for small businesses with fewer than 50 employees, low-risk AI systems, and entities subject to equivalent federal regulation.