Establishes Chapter 93M of the Massachusetts General Laws, imposing accountability and transparency obligations on developers and deployers of AI systems, with heightened requirements for high-risk AI systems used in consequential decisions (employment, housing, credit, healthcare, insurance, education, and government services). Developers must exercise reasonable care to mitigate algorithmic discrimination, provide documentation to deployers, notify the Attorney General of discrimination risks within 90 days, and publish a plain-language public summary. Deployers of high-risk systems must maintain a NIST-aligned risk management program, conduct annual impact assessments, notify consumers when an AI system drives a consequential decision, and provide appeal mechanisms. Corporations using AI for consumer targeting or behavioral influence face additional disclosure requirements. Enforcement rests exclusively with the Attorney General under Chapter 93A; no private right of action is created. Exempt are small businesses with fewer than 50 employees (provided they do not use proprietary training data) and low-risk AI systems.