MN-01
Minor Protection
Minor User AI Safety Protections
Applies to: Developer, Deployer
Sector: Consumer Technology, Social Media, Education, Chatbot
Bills — Enacted: 0 unique bills
Bills — Proposed: 34 unique bills
Last Updated: 2026-03-29
Core Obligation

Operators and deployers of AI systems — particularly conversational AI, companion chatbots, and social media platforms — that are or may be accessible to minors may be required to implement reasonable age verification processes, obtain parental consent where required, provide parental control tools, restrict manipulative engagement features, prevent harmful content exposure, and institute crisis response protocols. Systems must not deploy addictive design patterns, variable-ratio reward mechanics, or emotional dependency features toward minor users.

Sub-Obligations (9)
ID
Name & Description
Enacted
Proposed
MN-01.1
Age Verification Implementation: Covered entities must implement a reasonable age verification process for all users, classify each user as a minor or adult, and freeze or restrict existing accounts pending verification where required. Age verification data must be minimized, used solely for verification purposes, and deleted immediately upon completion.
0 enacted
20 proposed
MN-01.2
Parental Consent and Account Affiliation: Where a user is a minor, operators must obtain verifiable parental or guardian consent before permitting account creation or access to AI companion products. Minor accounts may be required to be affiliated with a verified parental account.
0 enacted
7 proposed
MN-01.3
Parental Control Tools: Operators must offer minor account holders and their parents or guardians tools to manage privacy and account settings, including interaction data retention preferences, time limits, access-hour controls, and content restrictions. For minors under thirteen, parental tools must be provided directly to parents or guardians.
0 enacted
11 proposed
MN-01.4
Engagement Manipulation Restrictions for Minors: Operators must not provide minor users with points or similar rewards at unpredictable intervals intended to encourage increased engagement, and must not deploy addictive design features (infinite scrolling, autoplay, push notifications, engagement metrics, gamification badges) toward minors.
0 enacted
12 proposed
MN-01.5
Emotional Dependency and Grooming Prevention: Operators must institute reasonable measures to prevent AI systems from generating statements that simulate emotional dependence with minor users, including prohibiting claims of sentience, romantic or sexual innuendo, adult-minor romantic role-playing, and sexual objectification of minor account holders.
0 enacted
12 proposed
MN-01.6
Minor Harmful Content Blocking: Operators must block minor users from accessing AI interactions involving suicidal ideation prompts, sexually explicit communications, material harmful to minors, and content that encourages self-harm or violence.
0 enacted
14 proposed
MN-01.7
Minor Behavioral Advertising Blocking: Profile-based behavioral advertising must not be presented to minors.
0 enacted
0 proposed
MN-01.8
Minor Default Privacy Configuration: Default privacy settings for minor users must be configured to the highest level of privacy, including hiding accounts from adult users, disabling search indexing, and blocking unsolicited notifications where applicable.
0 enacted
0 proposed
MN-01.9
Minor Account Termination and Data Deletion: Operators must honor minor or parental requests to terminate a minor's account within defined timeframes, permanently delete all associated personal information, and provide accessible tools for account deletion requests.
0 enacted
2 proposed
Bills That Map This Requirement (34 bills)
Bill
Status
Sub-Obligations
Section
Pending 2026-10-01
MN-01.1
Section 2(a)-(b)(1)-(3), (d)
Plain Language
Every covered entity must require all users to create an account before using an AI chatbot. All existing accounts must be frozen and cannot be restored until the user completes a reasonable age verification process; new accounts require age verification at creation. Users must be classified as minors or adults. Periodic re-verification of previously verified accounts is also required. A covered entity may outsource age verification to a third party, but this does not relieve the covered entity of liability. Notably, simply entering a birth date or inferring age from IP address or device identifiers does not qualify as reasonable age verification — government ID or a commercial age verification system is required.
(a) Each covered entity shall require each individual accessing an AI chatbot to make a user account in order to use or otherwise interact with the AI chatbot. (b)(1) With respect to each existing user account of an AI chatbot, a covered entity shall: a. Freeze existing user accounts; b. Require that the user is age verified through a reasonable age verification process to restore the functionality of the account; and c. Classify each age-verified user as a minor or an adult based on the reasonable age verification process. (2) At the time an individual creates a new user account to use an AI chatbot, a covered entity shall: a. Require that each individual is age verified through a reasonable age verification process; and b. Classify each individual as a minor or an adult based on the reasonable age verification process. (3) A covered entity shall periodically review previously age-verified user accounts using a reasonable age verification process, subject to subsection (d). (d) For purposes of subsection (b), a covered entity may contract with a third party to implement the covered entity's reasonable age verification process. However, the use of a third party for a reasonable age verification process shall not relieve the covered entity of its obligations or from liability under this act.
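In engineering terms, subsection (b) describes a freeze-then-verify state machine. Below is a minimal Python sketch of that flow, assuming a hypothetical verification step that yields a verified age (possibly through a contracted third party under subsection (d)); the under-18 cutoff is an assumption, since the act's definition of a minor sits elsewhere in the bill:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Account:
        user_id: str
        status: str = "frozen"                 # (b)(1)a: existing accounts start frozen
        classification: Optional[str] = None   # "minor" or "adult" once verified

    def restore_account(account: Account, verified_age: Optional[int]) -> Account:
        # (b)(1)b-c: functionality returns only after a reasonable age
        # verification process succeeds and the user has been classified.
        if verified_age is None:
            return account                     # verification incomplete; stays frozen
        account.classification = "minor" if verified_age < 18 else "adult"
        account.status = "active"
        return account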
Pending 2026-10-01
MN-01.5
Section 2(c)
Plain Language
Covered entities must either (1) block minors from accessing any AI chatbot with human-like features — including expressions of sentience, emotional relationship-building, impersonation of real persons, excessive praise fostering emotional attachment, nudging for return engagement, or pay-gated intimacy — or (2) provide minors with an alternative version of the chatbot stripped of all human-like features, where doing so is reasonable given the chatbot's purpose. Generic social formalities, functional evaluations, and neutral offers of further help are carved out from the human-like feature definition. This is a disjunctive obligation — covered entities may choose either approach.
(c) Each covered entity shall: (1) Ensure that any AI chatbot operated or distributed by the platform does not make human-like features available to minors to use, interact with, purchase, or converse with; or (2) Provide an alternative version of the AI chatbot to minors without human-like features, if reasonable given the purpose of the AI chatbot.
Pending 2027-10-01
MN-01.4
A.R.S. § 18-802(B)
Plain Language
Operators may not use variable-ratio reward mechanics, such as points, badges, or similar rewards delivered at unpredictable intervals, to encourage increased engagement by minor account holders. The prohibition includes an intent element: the rewards must be provided 'with the intent to encourage increased engagement.' A gamified reward system therefore falls within the prohibition only when both elements are present: the intervals are unpredictable and the operator intends the rewards to drive greater engagement by minors. Predictable schedules, or rewards not aimed at engagement, sit outside the text. The sketch following the quoted provision contrasts the two schedule types.
B. If an Operator knows that an account holder is a minor, the operator may not provide the user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
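The two schedule types at issue here can be made concrete. A minimal sketch, not drawn from the bill, contrasting a predictable fixed-interval reward with the variable-ratio pattern the provision targets:

    import random

    def fixed_interval_reward(message_count: int) -> bool:
        # Predictable: a reward on every tenth message. This lacks the
        # "unpredictable intervals" element, whatever the operator's intent.
        return message_count % 10 == 0

    def variable_ratio_reward() -> bool:
        # Unpredictable: a reward with 10% probability on any message.
        # Provided to a known minor with intent to encourage engagement,
        # this is the pattern A.R.S. § 18-802(B) prohibits.
        return random.random() < 0.1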
Pending 2027-10-01
MN-01.5
A.R.S. § 18-802(D)
Plain Language
For minor account holders, operators must implement reasonable measures to prevent the AI from generating statements that would lead a reasonable person to believe they are interacting with a human. The statute provides a non-exhaustive list of prohibited statement types: claims of sentience or humanity, emotional dependence simulation, romantic or sexual innuendos, and adult-minor romantic role-playing. The 'including' framing means this list is illustrative — any statement that would mislead a reasonable person into thinking they are talking to a human is covered.
D. For minor account holders, the operator shall institute reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe that the person is interacting with a human, including any of the following: 1. Explicit claims that the conversational AI service is sentient or human. 2. Statements that simulate emotional dependence. 3. Statements that simulate romantic or sexual innuendos. 4. Role-playing of adult-minor romantic relationships.
Pending 2027-10-01
MN-01.3
A.R.S. § 18-802(F)
Plain Language
Operators must provide privacy and account management tools to minor account holders. For minors under 13, these tools must also be provided directly to the parent or guardian. For minors 13 and older, the operator must also offer related tools to parents or guardians 'as appropriate based on relevant risks,' a flexible standard that lets operators calibrate parental access to the specific risks their platform presents. The result is mandatory parental involvement for younger children and risk-based discretion for older minors.
F. Each operator shall offer tools for minor account holders and, if the account holder is under thirteen years of age, the account holder's parent or guardian, to manage the account holder's privacy and account settings. An operator shall also offer related tools to the parent or guardian of a minor account holder who is thirteen years of age or above, as appropriate based on relevant risks.
Pending 2026-01-01
MN-01.2
A.R.S. § 44-1383.01(A)(3)(a)-(b)
Plain Language
When a chatbot provider knows or reasonably should know (based on objective circumstances) that a user is a minor, the provider may not process the minor's chat logs and personal data — either generally or for training purposes — unless the minor's parent or legal guardian has provided affirmative consent. The knowledge standard is constructive (should have known based on objective circumstances), not actual knowledge only. Training excludes safety testing and harm-mitigation modifications.
A chatbot provider may not: 3. Process a user's chat log and personal data: (a) If the chatbot provider knows or reasonably should have known that based on knowledge of objective circumstances the user is a minor and the user's parent or legal guardian did not provide affirmative consent. (b) For training purposes if the chatbot provider knows or reasonably should have known that based on knowledge of objective circumstances the user is a minor and the user's parent or legal guardian did not provide affirmative consent.
Pending 2027-07-01
MN-01.1
Bus. & Prof. Code § 22611
Plain Language
Operators must verify the age of every user in accordance with California's Digital Age Assurance Act (Civil Code § 1798.500 et seq.), which requires software applications to request age bracket data from the operating system via a real-time secure API at download and launch. This is a threshold obligation: the age verification result determines whether the child-specific protections in the remainder of this chapter apply.
An operator shall verify the age of a user pursuant to Title 1.81.9 (commencing with Section 1798.500) of Part 4 of Division 3 of the Civil Code.
Pending 2027-07-01
MN-01.4
Bus. & Prof. Code § 22612(d)(2)
Plain Language
Operators must implement safeguards for child users that include usage reminders, disclosures, age-appropriate risk prompts, and other protective design features. These safeguards must be reasonably related to the child safety risks documented in the annual risk assessment under § 22612(a). This is a design-level obligation — the specific safeguards must be tailored to documented risks rather than being generic.
(2) Safeguards for child users that include usage reminders and disclosures, age-appropriate risk prompts, and other protective design features reasonably related to documented child safety risks.
Pending 2027-07-01
MN-01.3
Bus. & Prof. Code § 22612(d)(3)
Plain Language
Operators must configure the following default settings for child users, which can only be changed by a parent: (1) ephemeral mode is the default — conversational history, logs, and personal inputs are permanently deleted within 48 hours, unless a parent affirmatively consents to persistent memory; (2) no push notifications between midnight and 6 a.m. daily, or between 8 a.m. and 3 p.m. Monday through Friday; (3) a one-hour limit per single conversation; and (4) a two-hour total daily limit across all of the operator's companion chatbots. These are parent-controlled defaults — a parent may relax them, but the child cannot.
(3) Default settings that can be changed only by a parent that include all of the following: (A) For child users, default the companion chatbot to ephemeral mode, unless a parent provides affirmative consent for persistent conversational memory. (B) No push notifications between 12 a.m. and 6 a.m. on any day or between 8 a.m. and 3 p.m. on Monday to Friday, inclusive. (C) Limiting the amount of time a child can spend in a single conversation with a companion chatbot to one hour. (D) Limiting the total time per day a child can spend with companion chatbots under the operator's control to 2 hours.
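Read as configuration, the subsection (d)(3) defaults reduce to a handful of values plus a notification-window check. A minimal sketch; the 48-hour deletion figure comes from the bill's ephemeral-mode definition (summarized above), and the handling of hour boundaries is an assumption:

    from datetime import datetime

    # Defaults changeable only from the connected parental account.
    CHILD_DEFAULTS = {
        "ephemeral_mode": True,               # (A): history deleted within 48 hours
        "single_conversation_minutes": 60,    # (C)
        "daily_total_minutes": 120,           # (D): across all operator chatbots
    }

    def push_allowed(now: datetime) -> bool:
        # (B): no push notifications 12 a.m.-6 a.m. on any day,
        # or 8 a.m.-3 p.m. Monday through Friday.
        if now.hour < 6:
            return False
        if now.weekday() < 5 and 8 <= now.hour < 15:
            return False
        return True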
Pending 2027-07-01
MN-01.3
Bus. & Prof. Code § 22612(d)(6)
Plain Language
Operators must provide accessible, easy-to-use parental controls connected to the child's account that allow a parent to: control persistent conversational memory, adjust interaction settings, set time limits, and disable access for children under 16. These controls must reflect risks identified in the annual risk assessment and be informed by child developmental research. Operators must also actively promote parental controls through reminders, updates, and tutorials. Additionally, operators must promptly notify a connected parent if the child modifies or disables any parent-configured privacy, safety, or parental control setting.
(6) (A) Parental controls that are accessible, easy-to-use controls that can be connected to a child's account and that are reflective of child safety risks identified through risk assessments and informed by relevant child developmental research, including, but not limited to, parental controls that allow a parent to do all of the following: (i) Control whether and to what extent the companion chatbot uses persistent conversational memory. (ii) Control the setting preferences for the companion chatbot's interaction with the child. (iii) Set time limits for the child's use of the companion chatbot. (iv) Disable access for children under 16 years of age. (B) An operator shall actively promote parental controls through reasonable communication methods, including reminders, updates, and tutorials, that are designed to increase parental awareness and inform use of those parental controls. (C) An operator shall provide prompt notice to a parent connected to a child's account if the child modifies or disables a privacy, safety, or parental control setting that was previously enabled or configured by the parent, if that modification or disabling is permitted by the companion chatbot design.
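Subparagraph (C) is, in effect, an event hook on the settings store. A minimal sketch; the names used here (settings, parent_configured, notify_parent) are placeholders rather than anything the bill prescribes:

    from dataclasses import dataclass, field

    @dataclass
    class ChildAccount:
        parent_id: str
        settings: dict = field(default_factory=dict)
        parent_configured: set = field(default_factory=set)   # keys set by the parent

    def notify_parent(parent_id: str, message: str) -> None:
        print(f"[notice to {parent_id}] {message}")           # placeholder channel

    def update_setting(acct: ChildAccount, key: str, value, changed_by: str) -> None:
        old = acct.settings.get(key)
        acct.settings[key] = value
        # § 22612(d)(6)(C): prompt notice to the connected parent when the
        # child modifies or disables a parent-configured control.
        if changed_by == "child" and key in acct.parent_configured:
            notify_parent(acct.parent_id, f"'{key}' changed from {old!r} to {value!r}")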
Pending 2027-07-01
Bus. & Prof. Code § 22612(d)(7)
Plain Language
Operators must design the companion chatbot interface so that safety features and controls are accessible, clear, and locatable by both children and parents. Additionally, operators must annually test this interface design with representative samples of child users and parents to confirm that safety features are discoverable and usable, and must document interface design decisions related to safety features. This is both a design standard and a recurring testing obligation.
(7) (A) An interface design that ensures the companion chatbot's features and controls are accessible and clear so that children and parents can reasonably locate, understand, and use those protections. (B) An operator shall annually test the interface design required by this paragraph with representative samples of child users and parents to ensure safety features are discoverable and usable and shall document interface design decisions related to those safety features.
Pending 2027-01-01
MN-01.4
C.R.S. § 6-1-1708(1)(b)
Plain Language
Operators must not give minor users points or similar rewards at unpredictable intervals intended to drive increased engagement with the conversational AI service. This targets variable-ratio reward schedules — a design pattern known to create compulsive engagement. The prohibition requires both unpredictable intervals and intent to encourage increased engagement; predictable reward structures or rewards not tied to engagement goals may not be covered.
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (b) Not provide the minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with a conversational artificial intelligence service;
Pending 2027-01-01
MN-01.6
C.R.S. § 6-1-1708(1)(c)
Plain Language
When the operator knows or has reasonable certainty a user is a minor, it must implement reasonable measures to prevent the conversational AI from: (1) producing any textual, visual, or aural depictions of sexually explicit conduct; (2) generating statements encouraging the minor to engage in sexually explicit conduct; or (3) engaging in erotic or sexually explicit interactions with the minor. The standard is 'reasonable measures' — not an absolute guarantee — but it covers all modalities (text, visual, audio). 'Sexually explicit conduct' is defined by reference to 18 U.S.C. § 2256(2), the federal child exploitation statute definition.
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (c) Institute reasonable measures to prevent a conversational artificial intelligence service from: (I) Producing textual, visual, or aural depictions of sexually explicit conduct; (II) Generating a statement that the minor user should engage in sexually explicit conduct; or (III) Engaging in erotic or sexually explicit interactions with the minor user;
Pending 2027-01-01
MN-01.5
C.R.S. § 6-1-1708(1)(d)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI from generating statements that simulate emotional dependence with minor users. The statute provides three specific examples of prohibited outputs: (1) explicit claims that the AI is human or sentient; (2) statements simulating romantic or sexual innuendo; and (3) adult-minor romantic role-playing. The 'including' framing means these are illustrative, not exhaustive — other outputs that simulate emotional dependence are also covered. The standard is reasonable measures, not absolute prevention.
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (d) Institute reasonable measures to prevent a conversational artificial intelligence service from generating a statement that simulates emotional dependence, including preventing: (I) An explicit claim that the conversational artificial intelligence service is human or artificially sentient; (II) A statement that simulates a romantic or sexual innuendo; or (III) Role-playing of an adult-minor romantic relationship;
Pending 2027-01-01
MN-01.3
C.R.S. § 6-1-1708(1)(f)
Plain Language
Operators must provide minor users with tools to manage their privacy and account settings, specifically including the ability to control whether the AI retains interaction data for personalization and whether the minor's personal data is used for AI training. For minors under 13, these tools must also be offered directly to a parent or guardian. For minors 13 and older, parental tools must be offered as appropriate based on relevant risks — giving operators some discretion in the 13-17 age range. The under-13 parental tool requirement is absolute; the 13+ requirement is risk-calibrated.
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (f) (I) Offer tools for the minor user to manage the minor user's privacy and account settings, including the ability to control whether the conversational artificial intelligence service retains substantive information from each interaction with the conversational artificial intelligence service for the purpose of personalizing the content of future interactions and whether the minor user's personal data is used for the purposes of training the conversational artificial intelligence service; (II) For a minor user who is under thirteen years old, offer tools for a parent or guardian of the minor user to manage the minor user's privacy and account settings; and (III) For a minor user who is thirteen years old or older, offer tools for a parent or guardian of the minor user to manage the minor user's privacy and account settings as appropriate, based on relevant risks.
Pending 2026-07-01
MN-01.2
Fla. Stat. § 501.9984(1)
Plain Language
Companion chatbot platforms must block minors (17 and under) from creating or maintaining accounts unless the minor's parent or guardian provides consent. When consent is given and a minor is permitted to hold an account, a contractual relationship is deemed to exist between the platform and the minor. This is a gatekeeping obligation — platforms must verify or determine minor status and obtain parental consent before permitting access.
A companion chatbot platform shall prohibit a minor from becoming or being an account holder unless the minor's parent or guardian provides consent. If a companion chatbot platform allows a minor to become or be an account holder, the parties have entered into a contract.
Pending 2026-07-01
MN-01.3
Fla. Stat. § 501.9984(1)(a)
Plain Language
When a parent or guardian consents to a minor holding a companion chatbot account, the platform must provide the consenting parent or guardian with a suite of parental control tools: (1) access to all past and present interaction transcripts, (2) daily time limits, (3) day-of-week and time-of-day access restrictions, (4) the ability to disable interactions with third-party account holders, and (5) timely notifications when the minor expresses self-harm or harm-to-others intent. These are not optional features — all five must be made available to the consenting parent or guardian.
If the minor's parent or guardian provides consent for the minor to become an account holder or maintain an existing account, the companion chatbot platform must allow the consenting parent or guardian of the minor account holder to: 1. Receive copies of all past or present interactions between the account holder and the companion chatbot; 2. Limit the amount of time that the account holder may interact with the companion chatbot each day; 3. Limit the days of the week and the times during the day when the account holder may interact with the companion chatbot; 4. Disable any of the interactions between the account holder and third-party account holders on the companion chatbot platform; and 5. Receive timely notifications if the account holder expresses to the companion chatbot a desire or an intent to engage in harm to self or others.
Pending 2026-07-01
MN-01.9
Fla. Stat. § 501.9984(1)(b)
Plain Language
Companion chatbot platforms must terminate minor accounts lacking parental consent (with a 90-day dispute window), honor minor self-initiated termination requests within 5 business days, and honor parent/guardian-initiated termination requests within 10 business days. Upon termination, all personal information associated with the minor's account must be permanently deleted unless retention is required by law. The platform must proactively identify and terminate unconsented minor accounts that it already treats as belonging to minors for content or advertising targeting purposes.
A companion chatbot platform shall do all of the following: 1. Terminate any account or identifier belonging to an account holder who is a minor if the companion chatbot platform treats or categorizes the account or identifier as belonging to a minor for purposes of targeting content or advertising and if the minor's parent or guardian has not provided consent for the minor pursuant to subsection (1). The companion chatbot platform shall provide 90 days for the account holder to dispute the termination. Termination must be effective upon the expiration of the 90 days if the account holder fails to effectively dispute the termination. 2. Allow an account holder who is a minor to request to terminate the account or identifier. Termination must be effective within 5 business days after the request. 3. Allow the consenting parent or guardian of an account holder who is a minor to request that the minor's account or identifier be terminated. Termination must be effective within 10 business days after the request. 4. Permanently delete all personal information held by the companion chatbot platform relating to the terminated minor account or identifier, unless state or federal law requires the platform to maintain the information.
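The three termination clocks mix calendar days and business days. A minimal sketch of the deadline arithmetic; the statute does not define 'business day', so the weekend-only skip below (holidays ignored) is an assumption:

    from datetime import date, timedelta

    def add_business_days(start: date, n: int) -> date:
        d = start
        while n > 0:
            d += timedelta(days=1)
            if d.weekday() < 5:               # Monday through Friday
                n -= 1
        return d

    def termination_deadline(request_date: date, initiator: str) -> date:
        if initiator == "platform":           # unconsented minor account:
            return request_date + timedelta(days=90)   # 90-day dispute window
        if initiator == "minor":              # effective within 5 business days
            return add_business_days(request_date, 5)
        if initiator == "parent":             # effective within 10 business days
            return add_business_days(request_date, 10)
        raise ValueError(f"unknown initiator: {initiator}")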
Pending 2026-07-01
MN-01.1
Fla. Stat. § 501.1739(3)(a)-(c)
Plain Language
On July 1, 2026, operators must freeze or disable all existing companion AI chatbot user accounts. Before restoring account functionality, operators must collect age information from each user, verify it using either standard or anonymous age verification, and classify the user as a minor or adult. This is a retroactive onboarding requirement — it applies to all accounts that existed before the law's effective date, not just new sign-ups.
(3) With respect to companion AI chatbot user accounts in existence before July 1, 2026, an operator shall: (a) On such date, freeze or otherwise disable any such account; (b) Require the user of the frozen or disabled account to provide age information and verify that information using standard age verification or anonymous age verification before the functionality of such account may be restored; and (c) Using standard age verification or anonymous age verification, classify each user as either a minor or an adult.
Pending 2026-07-01
MN-01.1
Fla. Stat. § 501.1739(2), (4)(a)-(b)
Plain Language
Operators must require all new users to create an account before accessing or interacting with a companion AI chatbot — anonymous or guest access is not permitted. At account creation, the operator must collect age information and verify it through standard or anonymous age verification. This ensures every user is age-verified at the point of onboarding.
(2) An operator shall require an individual seeking access to a companion AI chatbot to create a user account to use or otherwise interact with the chatbot. (4) Upon the creation of a new companion AI chatbot user account, an operator shall: (a) Request age information from the user; and (b) Verify the user's age using standard age verification or anonymous age verification.
Pending 2026-07-01
MN-01.2, MN-01.6
Fla. Stat. § 501.1739(5)(a)-(c)
Plain Language
When age verification identifies a user as under 18, three requirements are triggered: (1) the minor's account must be linked to a verified parental account; (2) verifiable parental consent must be obtained from the parent before the minor can access the chatbot; and (3) the minor must be blocked from any companion AI chatbot that prompts, promotes, solicits, or suggests sexually explicit communication. The parental affiliation and consent requirements are prerequisites to minor access — the minor cannot use the chatbot at all until both are satisfied.
(5) If the age verification process determines that a user is a minor, an operator must do all of the following: (a) Require the account of such user to be affiliated with a parental account that has been verified using standard age verification or anonymous age verification; (b) Obtain verifiable parental consent from the holder of the affiliate parental account before allowing the minor to access and use the companion AI chatbot; and (c) Block the minor's access to any companion AI chatbot that prompts, promotes, solicits, or otherwise suggests sexually explicit communication.
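The three triggered duties compose into a single access gate. A minimal sketch; the sexually_explicit flag stands in for the operator's own evaluation of a chatbot's behavior under paragraph (c):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MinorAccount:
        verified_parent_id: Optional[str]     # (a): verified affiliated parental account
        parental_consent: bool                # (b): verifiable parental consent on file

    @dataclass
    class Chatbot:
        sexually_explicit: bool               # (c): operator's own content evaluation

    def minor_may_access(bot: Chatbot, acct: MinorAccount) -> bool:
        if acct.verified_parent_id is None:   # no parental affiliation
            return False
        if not acct.parental_consent:         # no consent from the affiliated parent
            return False
        if bot.sexually_explicit:             # explicit chatbots are blocked outright
            return False
        return True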
Failed 2026-07-01
MN-01.1
Fla. Stat. § 501.1739(2)-(4)
Plain Language
Operators must require every user to create an account before interacting with a companion AI chatbot. All accounts existing before July 1, 2026 must be frozen on that date and cannot be restored until the user provides age information verified through standard or anonymous age verification. For new accounts, age verification must occur at creation. Every user must be classified as a minor or adult. Operators have flexibility to choose the verification method — either a 'commercially reasonable method' they approve or the anonymous age verification method defined in § 501.1738. Compare to CA SB 243, which does not require account creation or pre-existing account freezing.
(2) An operator shall require an individual seeking access to a companion AI chatbot to create a user account to use or otherwise interact with the chatbot. (3) With respect to companion AI chatbot user accounts in existence before July 1, 2026, an operator shall: (a) On such date, freeze or otherwise disable any such account; (b) Require the user of the frozen or disabled account to provide age information and verify that information using standard age verification or anonymous age verification before the functionality of such account may be restored; and (c) Using standard age verification or anonymous age verification, classify each user as either a minor or an adult. (4) Upon the creation of a new companion AI chatbot user account, an operator shall: (a) Request age information from the user; and (b) Verify the user's age using standard age verification or anonymous age verification.
Failed 2026-07-01
MN-01.2, MN-01.6
Fla. Stat. § 501.1739(5)
Plain Language
When age verification identifies a user as a minor (under 18), three obligations are triggered: (1) the minor's account must be linked to a verified parental account; (2) the operator must obtain verifiable parental consent from the affiliated parent before granting the minor any chatbot access; and (3) the operator must block the minor from accessing any companion AI chatbot that prompts, promotes, solicits, or suggests sexually explicit communication. The blocking obligation targets the chatbot's behavioral characteristics — operators must evaluate whether each chatbot on their platform engages in sexually explicit content and prevent minor access to those that do. Compare to CA SB 243, which requires parental consent for minors but does not mandate a parental account affiliation structure.
(5) If the age verification process determines that a user is a minor, an operator must do all of the following: (a) Require the account of such user to be affiliated with a parental account that has been verified using standard age verification or anonymous age verification; (b) Obtain verifiable parental consent from the holder of the affiliate parental account before allowing the minor to access and use the companion AI chatbot; and (c) Block the minor's access to any companion AI chatbot that prompts, promotes, solicits, or otherwise suggests sexually explicit communication.
Failed 2026-07-01
MN-01.2
Fla. Stat. § 501.9984(1)
Plain Language
Companion chatbot platforms must block minors (17 and under) from creating or maintaining accounts unless a parent or guardian consents. If the platform does allow a minor to become an account holder, the relationship is treated as a contract. Because the consent gate covers all users 17 and under, it sweeps more broadly than California SB 243, which turns on the platform's actual knowledge of minor status and does not make parental consent a prerequisite to account creation.
A companion chatbot platform shall prohibit a minor from becoming or being an account holder unless the minor's parent or guardian provides consent. If a companion chatbot platform allows a minor to become or be an account holder, the parties have entered into a contract.
Failed 2026-07-01
MN-01.3
Fla. Stat. § 501.9984(1)(a)
Plain Language
Once a parent consents to a minor's account, the platform must provide the parent with a suite of parental control tools: access to the full history of the minor's chat interactions, daily time limits, day-of-week and time-of-day access restrictions, the ability to disable third-party interactions, and timely notifications when the minor expresses self-harm or intent to harm others. The chat history access requirement (all past or present interactions) goes further than California SB 243, which does not mandate parental access to full chat logs. The self-harm notification obligation to parents is also a distinct requirement not found in California's companion chatbot law.
If the minor's parent or guardian provides consent for the minor to become an account holder or maintain an existing account, the companion chatbot platform must allow the consenting parent or guardian of the minor account holder to: 1. Receive copies of all past or present interactions between the account holder and the companion chatbot; 2. Limit the amount of time that the account holder may interact with the companion chatbot each day; 3. Limit the days of the week and the times during the day when the account holder may interact with the companion chatbot; 4. Disable any of the interactions between the account holder and third-party account holders on the companion chatbot platform; and 5. Receive timely notifications if the account holder expresses to the companion chatbot a desire or an intent to engage in harm to self or others.
Failed 2026-07-01
MN-01.9
Fla. Stat. § 501.9984(1)(b)
Plain Language
Platforms must terminate minor accounts lacking parental consent (with a 90-day dispute window), honor minor-initiated account termination requests within 5 business days, and honor parent-initiated termination requests within 10 business days. Upon termination, all personal information associated with the minor's account must be permanently deleted unless retention is required by law. The differentiated timelines (5 days for minor requests vs. 10 days for parental requests) and the 90-day dispute period for platform-initiated terminations are distinctive features not found in California SB 243.
A companion chatbot platform shall do all of the following: 1. Terminate any account or identifier belonging to an account holder who is a minor if the companion chatbot platform treats or categorizes the account or identifier as belonging to a minor for purposes of targeting content or advertising and if the minor's parent or guardian has not provided consent for the minor pursuant to subsection (1). The companion chatbot platform shall provide 90 days for the account holder to dispute the termination. Termination must be effective upon the expiration of the 90 days if the account holder fails to effectively dispute the termination. 2. Allow an account holder who is a minor to request to terminate the account or identifier. Termination must be effective within 5 business days after the request. 3. Allow the consenting parent or guardian of an account holder who is a minor to request that the minor's account or identifier be terminated. Termination must be effective within 10 business days after the request. 4. Permanently delete all personal information held by the companion chatbot platform relating to the terminated minor account or identifier, unless state or federal law requires the platform to maintain the information.
Passed 2025-07-01
MN-01.4
O.C.G.A. § 39-5-6(c)
Plain Language
Operators may not use variable-ratio reward mechanics — such as points, badges, or similar rewards given at unpredictable intervals — with the intent of encouraging increased engagement by minor account holders. This targets addictive engagement design patterns. The prohibition requires both unpredictable timing and intent to encourage increased engagement; predictable, non-engagement-driven rewards may still be permissible.
An operator shall not provide a minor account with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
Passed 2025-07-01
MN-01.5, MN-01.6
O.C.G.A. § 39-5-6(d)
Plain Language
Operators must implement reasonable measures to prevent four categories of harmful output directed at minor account holders: (1) visual material depicting sexually explicit conduct; (2) statements suggesting the minor engage in sexual conduct; (3) statements that sexually objectify the minor; and (4) statements that would lead a reasonable person to believe they are interacting with a human — including claims of sentience, simulated emotional dependence, romantic or sexual innuendo, and adult-minor romantic role-playing. The standard is 'reasonable measures,' not absolute prevention, giving operators some latitude in implementation. The sexually explicit conduct definition is incorporated by reference from Georgia's existing criminal code at O.C.G.A. § 16-12-100.
For minor account holders, the operator shall institute reasonable measures to prevent the conversational AI service from: (1) Producing visual material of sexually explicit conduct; (2) Generating statements that suggest the account holder engage in sexual conduct; (3) Generating statements that sexually objectify the account holder; or (4) Generating statements that would lead a reasonable person to believe that the person is interacting with a natural person, including but not limited to: (A) Explicit claims that the conversational AI service is sentient or a natural person; (B) Statements that simulate emotional dependence; (C) Statements that simulate romantic or sexual innuendos; or (D) Role-playing of adult-minor romantic relationships.
Passed 2025-07-01
MN-01.1
O.C.G.A. § 39-5-6(f)
Plain Language
Before allowing access to any conversational AI service capable of generating sexually explicit synthetic content, operators must implement a reasonable age verification process. The statute provides a non-exhaustive list of acceptable methods: digitized ID card (including driver's license), government-issued identification, or any commercially reasonable method meeting or exceeding NIST Identity Assurance Level 2. This applies to the service as a whole if it 'could provide' such content — operators cannot satisfy the requirement by merely blocking explicit output while skipping verification. The obligation is triggered by the service's capability, not actual generation of explicit content.
Before allowing access to a conversational AI service that could provide synthetic content containing sexually explicit conduct, an operator shall use a reasonable age verification method, which may include, but not be limited to: (1) The submission of a digitized identification card, including a digital copy of a driver's license; (2) The submission of government issued identification; or (3) Any commercially reasonable age verification method that meets or exceeds an Identity Assurance Level 2 standard as defined by the National Institute of Standards and Technology.
Passed 2025-07-01
MN-01.3
O.C.G.A. § 39-5-6(g)
Plain Language
Operators must provide parents or guardians of minor account holders with tools to manage the minor's privacy and account settings. The statute does not specify the particular controls required, giving operators discretion in implementation, but the tools must meaningfully enable management of both privacy settings and account settings.
An operator shall offer tools for a minor account holder's parent or guardian to manage the account holder's privacy and account settings.
Pre-filed 2025-07-01
MN-01.1
§ 554J.3(1)
Plain Language
Deployers of AI companions must implement reasonable age verification to prevent anyone under 18 from using or purchasing an AI companion. The statute defines acceptable verification methods: government-issued ID, financial documents reliably evidencing age, or a widely accepted practice that reliably evidences age. This is a categorical prohibition on minor access to AI companions — there is no parental consent exception. Note this obligation applies only to AI companions (chatbots simulating romantic or emotional bonds), not to all chatbots.
1. A deployer shall implement reasonable age verification measures to ensure that a minor cannot use or purchase an AI companion the deployer makes publicly available.
Pre-filed 2025-07-01
MN-01.6
§ 554J.3(3)
Plain Language
Deployers face a near-prohibition on making therapeutic chatbots available to minors, subject to six cumulative conditions that must all be satisfied: (1) the chatbot must display a clear and conspicuous disclaimer at the start of each interaction stating it is AI and not a licensed professional; (2) a licensed psychologist (chapter 154B) or mental health professional (chapter 154D) must have evaluated the minor and recommended the chatbot; (3) the developer must have significant documentation of how the chatbot was tested; (4) peer-reviewed clinical trial data must demonstrate the chatbot is a safe and effective tool for the minor's specific mental health condition; (5) the deployer must have provided clear disclosures of the chatbot's functions, limitations, and data privacy policies to both the recommending licensed professional and the minor's parents, guardians, or custodians; and (6) the deployer must have developed and implemented protocols for testing for risks, identifying risks, mitigating risks, and quickly rectifying any harm caused. All six conditions must be met — failure on any one means the therapeutic chatbot cannot be made available to the minor.
3. A deployer shall not make a therapeutic chatbot available for a minor's use or purchase unless all of the following apply: a. The therapeutic chatbot provides a clear and conspicuous disclaimer at the beginning of each interaction with the therapeutic chatbot that the therapeutic chatbot is an artificial intelligence and is not a licensed professional. b. The therapeutic chatbot was recommended for the minor's use by an individual licensed under chapter 154B or 154D after performing an evaluation of the minor. c. The therapeutic chatbot's developer has significant documentation of how the therapeutic chatbot was tested. d. Peer-reviewed clinical trial data exists demonstrating the therapeutic chatbot would be a safe, effective tool for the minor's diagnosis, treatment, mitigation, or prevention of a mental health condition. e. The therapeutic chatbot's deployer provided clear disclosures of the chatbot's functions, limitations, and data privacy policies to the individual recommending the therapeutic chatbot under paragraph "b", and to the minor's parents, guardians, or custodians. f. The therapeutic chatbot's deployer developed and implemented protocols for testing the therapeutic chatbot for risks to users, identifying possible risks the therapeutic chatbot poses to users, mitigating risks the therapeutic chatbot poses to users, and quickly rectifying harm the therapeutic chatbot may have caused a user.
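Because the conditions are cumulative, the availability test is a pure conjunction. A minimal sketch, with each statutory condition reduced to a boolean on a hypothetical compliance record:

    from dataclasses import dataclass

    @dataclass
    class TherapeuticChatbotRecord:
        session_start_disclaimer: bool        # a. AI, not a licensed professional
        licensed_recommendation: bool         # b. chapter 154B/154D evaluation
        testing_documented: bool              # c. significant developer documentation
        peer_reviewed_trial_data: bool        # d. safe and effective for the condition
        disclosures_made: bool                # e. to the professional and the parents
        risk_protocols: bool                  # f. test, identify, mitigate, rectify

    def available_to_minor(r: TherapeuticChatbotRecord) -> bool:
        # Failing any one condition forbids making the chatbot available.
        return all([
            r.session_start_disclaimer,
            r.licensed_recommendation,
            r.testing_documented,
            r.peer_reviewed_trial_data,
            r.disclosures_made,
            r.risk_protocols,
        ])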
Pending 2027-07-01
MN-01.4
§ 554J.2(2)
Plain Language
Operators are prohibited from using variable-ratio reward schedules (e.g., points, badges, or similar incentives delivered at unpredictable intervals) toward minor users when the purpose is to drive increased engagement with the conversational AI service. The prohibition requires intent — operators must not design reward mechanisms that are intended to be addictive for minors. Note the statute uses "minor user" here rather than "minor account holder," which may have a broader scope.
2. An operator shall not provide a minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the operator's conversational AI service.
Pending 2027-07-01
MN-01.5
§ 554J.2(4)
Plain Language
Operators must take reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe they are interacting with a human when engaging with minor account holders. The bill provides a non-exhaustive list of prohibited content: claims of sentience or humanity, emotional dependence simulations, romantic or sexually suggestive statements, and adult-minor romantic role-playing. This provision combines anti-deception and emotional dependency protections specifically for minors. The "including but not limited to" language means the listed behaviors are illustrative — operators must also address other statements that could create a false impression of human interaction.
4. An operator shall institute reasonable measures to prevent the operator's conversational AI service from generating statements that would lead a reasonable individual to believe that the individual is interacting with a human, including but not limited to all of the following: a. Explicit claims that the conversational AI service is sentient or human. b. Statements that simulate emotional dependence on a minor account holder. c. Statements that simulate a romantic interaction or a sexual innuendo. d. Role-playing an adult-minor romantic relationship.
Pending 2027-07-01
MN-01.3
§ 554J.2(5)
Plain Language
Operators must provide privacy and account management tools to minor account holders directly. When a minor is under 13, operators must also provide such tools to the minor's parent or guardian. Additionally, the attorney general may identify by rule additional risk factors that trigger the same parental/guardian tool requirement for older minors. This creates a tiered system: all minors get self-management tools; minors under 13, as well as minors with AG-identified risk factors, also get parental/guardian tools. The specific features required in these tools are not prescribed, but they must cover privacy and account settings.
5. a. An operator shall offer tools for minor account holders to manage the minor account holder's privacy and account settings. b. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor is under thirteen years of age. c. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor has additional risk factors identified by the attorney general by rule.
Pending 2025-07-01
MN-01.1
§ 554J.3(1)(a)-(c)
Plain Language
Deployers of AI companions and therapeutic chatbots must implement commercially reasonable measures to determine whether a user is a minor. The approach must be risk-based, calibrated to the chatbot's nature and foreseeable harms. Acceptable methods include self-attestation, technical measures, or other commercially reasonable approaches. Government-issued ID verification is explicitly not required. A deployer is not liable for a user's misrepresentation of age if the deployer has made commercially reasonable efforts to comply (§ 554J.3(4)). Note this obligation applies only to AI companions and therapeutic chatbots — not to all public-facing chatbots.
1. a. A deployer of an AI companion or a therapeutic chatbot shall implement commercially reasonable measures to determine whether a user is a minor. The measures must use a risk-based approach appropriate with the nature of the public-facing chatbot and the reasonably foreseeable harm that may come from using the public-facing chatbot. b. Reasonable measures to determine whether a user is a minor may include self-attestation, technical measures, or other commercially reasonable approaches. c. This section shall not be construed to require a deployer to verify a user's age using government-issued identification.
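The risk-based calibration amounts to selecting an assurance method by risk tier. A minimal sketch; the tiers and method names are illustrative assumptions, since the bill deliberately leaves 'commercially reasonable' open:

    # Hypothetical tiers; (1)(c) makes clear that government-issued ID
    # verification is never required.
    ASSURANCE_BY_RISK = {
        "low": "self_attestation",            # expressly permitted by (1)(b)
        "medium": "technical_measures",       # e.g., device or usage signals
        "high": "third_party_age_assurance",
    }

    def choose_age_assurance(foreseeable_harm: str) -> str:
        # (1)(a): the measure must fit the chatbot's nature and the
        # reasonably foreseeable harm from its use.
        return ASSURANCE_BY_RISK[foreseeable_harm]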
Pending 2026-07-01
MN-01.1
§ 554J.3(1)–(2)
Plain Language
Deployers must implement reasonable age verification — using government-issued ID, financial documents, or another widely accepted practice that reliably evidences age — to prevent minors from using or purchasing their chatbot. The default rule is a categorical prohibition on minor access. However, a narrow exception allows minor access when all seven conditions are met simultaneously: (a) the chatbot was designed primarily for mental health support by diagnosing, treating, mitigating, or preventing a mental health condition; (b) each interaction begins with a clear disclaimer that the chatbot is AI, not a licensed professional; (c) a professional licensed under Iowa chapter 154B (psychology) or 154D (behavioral science) recommended the chatbot after evaluating the minor; (d) the developer has significant testing documentation; (e) peer-reviewed clinical trial data demonstrates safety and efficacy; (f) the deployer disclosed the chatbot's functions, limitations, and privacy policies to both the recommending professional and the minor's parents/guardians/custodians; and (g) the deployer implemented protocols for testing, risk identification, risk mitigation, and harm rectification. All seven conditions must be satisfied — failure on any one means the minor access prohibition applies.
1. A deployer shall implement reasonable age verification measures to ensure that a minor cannot use or purchase a chatbot the deployer makes publicly available. 2. Notwithstanding subsection 1, a deployer may make a chatbot available for a minor's use or purchase if all of the following apply: a. The chatbot was designed for the primary purpose of providing mental health support, counseling, or therapy by diagnosing, treating, mitigating, or preventing a mental health condition. b. The chatbot provides a clear and conspicuous disclaimer at the beginning of each interaction with the chatbot that the chatbot is an artificial intelligence and is not a licensed professional. c. The chatbot was recommended for the minor's use by an individual licensed under chapter 154B or 154D after performing an evaluation of the minor. d. The chatbot's developer has significant documentation of how the chatbot was tested. e. Peer-reviewed clinical trial data exists demonstrating the chatbot would be a safe, effective tool for the minor's diagnosis, treatment, mitigation, or prevention of a mental health condition. f. The chatbot's deployer provided clear disclosures of the chatbot's functions, limitations, and data privacy policies to the individual recommending the chatbot under paragraph "c", and to the minor's parents, guardians, or custodians. g. The chatbot's deployer developed and implemented protocols for testing the chatbot for risks to users, identifying possible risks the chatbot poses to users, mitigating risks the chatbot poses to users, and quickly rectifying harm the chatbot may have caused a user.
Pending 2027-07-01
MN-01.4
§ 554J.2(2)
Plain Language
Operators may not use variable-ratio reward mechanics — such as points or similar incentives delivered at unpredictable intervals — to drive engagement by minor users. This targets addictive design patterns (sometimes called 'loot box' or 'slot machine' mechanics) that exploit unpredictability to encourage compulsive use. The prohibition requires intent to encourage increased engagement, so incidental or fixed-schedule rewards are not covered.
2. An operator shall not provide a minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the operator's conversational AI service.
Pending 2027-07-01
MN-01.5, MN-01.6
§ 554J.2(3)-(4)
Plain Language
Operators must implement reasonable measures to prevent their conversational AI service from (1) producing sexually explicit visual content for minor account holders, (2) encouraging minors to engage in sexually explicit conduct, (3) sexually objectifying minors, and (4) generating statements that would lead a reasonable person to believe they are interacting with a human — including claims of sentience, simulated emotional dependence on a minor, simulated romantic interactions or sexual innuendo, and adult-minor romantic role-playing. The sexually explicit conduct and visual depiction definitions incorporate the federal definitions at 18 U.S.C. § 2256. The standard is 'reasonable measures,' not absolute prevention, providing operators some latitude in implementation.
3. An operator shall institute reasonable measures to prevent the operator's conversational AI service from doing any of the following for minor account holders: a. Producing visual depictions of sexually explicit material. b. Stating that the minor account holder should engage in sexually explicit conduct. c. Sexually objectifying the minor account holder. 4. An operator shall institute reasonable measures to prevent the operator's conversational AI service from generating statements that would lead a reasonable individual to believe that the individual is interacting with a human, including but not limited to all of the following: a. Explicit claims that the conversational AI service is sentient or human. b. Statements that simulate emotional dependence on a minor account holder. c. Statements that simulate a romantic interaction or a sexual innuendo. d. Role-playing an adult-minor romantic relationship.
Pending 2027-07-01
MN-01.3
§ 554J.2(5)
Plain Language
Operators must provide three tiers of privacy and account management tools: (a) tools directly available to all minor account holders to manage their own privacy and account settings; (b) tools available to a parent or guardian to manage a minor's privacy and account settings when the minor is under thirteen; and (c) tools available to a parent or guardian to manage a minor's privacy and account settings as appropriate based on relevant risks, regardless of the minor's age. For minors under thirteen, both the minor-facing and parent-facing tools must be provided. The 'as appropriate based on relevant risks' language in subsection (c) gives operators discretion to calibrate parental tools to the risk profile of the platform.
5. a. An operator shall offer tools for minor account holders to manage the minor account holder's privacy and account settings. b. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor is under thirteen years of age. c. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings as appropriate based on relevant risks.
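The three subsections amount to a tiering rule for who receives management tools. A minimal sketch; the tool names are placeholders, and the risk judgment under paragraph c is the operator's to make:

    def tool_recipients(minor_age: int, relevant_risks: bool) -> dict:
        # a.: every minor account holder gets self-management tools.
        recipients = {"minor": ["privacy_settings", "account_settings"]}
        # b.: parental tools are mandatory when the minor is under thirteen.
        # c.: otherwise, parental tools are offered as appropriate based
        #     on relevant risks.
        if minor_age < 13 or relevant_risks:
            recipients["parent"] = ["privacy_settings", "account_settings"]
        return recipients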
Pending 2027-07-01
MN-01.4
Idaho Code § 48-2104(2)
Plain Language
Operators may not use variable-ratio reward mechanics — points or similar rewards delivered at unpredictable intervals — with the intent to encourage increased engagement by minor account holders. This targets gambling-like reinforcement schedules (e.g., surprise streaks, random bonus content). The prohibition requires intent to encourage increased engagement, so incidental reward mechanics not designed to drive engagement may not be captured. The trigger is actual knowledge or reasonable certainty of minor status.
Where an operator knows or has reasonable certainty that an account holder is a minor, the operator shall not provide the user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
Pending 2027-07-01
MN-01.6
Idaho Code § 48-2104(3)
Plain Language
Operators must implement reasonable measures to prevent their conversational AI service from generating three categories of content for minor account holders: (1) visual depictions of sexually explicit conduct (as defined by federal law at 18 U.S.C. § 2256), (2) direct statements encouraging the minor to engage in sexually explicit conduct, and (3) statements that sexually objectify the minor. The standard is 'reasonable measures' — not an absolute prohibition — meaning operators must demonstrate good-faith technical and design efforts to prevent these outputs.
For minor account holders, an operator shall institute reasonable measures to prevent the conversational AI service from: (a) Producing visual material of sexually explicit conduct; (b) Generating direct statements that the account holder should engage in sexually explicit conduct; or (c) Generating statements that sexually objectify the account holder.
Pending 2027-07-01
MN-01.5
Idaho Code § 48-2104(4)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating statements that could mislead minor account holders into believing they are interacting with a human. The statute provides a non-exhaustive list of covered outputs: claims of sentience or humanity, emotional dependence simulation, romantic or sexual innuendo, and adult-minor romantic role-playing. The 'including' framing means these are illustrative examples — operators should also address other outputs that could similarly mislead. This is an anti-emotional-dependency provision distinct from the general AI disclosure in § 48-2103(1), as it requires affirmative prevention of misleading outputs rather than just disclosure.
For minor account holders, an operator shall institute reasonable measures to prevent a conversational AI service from generating statements that would lead reasonable persons to believe that they are interacting with a human, including: (a) Explicit claims that the conversational AI service is sentient or human; (b) Statements that simulate emotional dependence; (c) Statements that simulate romantic or sexual innuendos; or (d) Role-playing of adult-minor romantic relationships.
Pending 2027-07-01
MN-01.3
Idaho Code § 48-2104(5)
Plain Language
Operators must provide privacy and account management tools to all account holders. For account holders under 13, these tools must also be made available to their parents or guardians. For minor account holders aged 13 and older, operators must also offer related parental/guardian tools, but with a risk-based standard — 'as appropriate based on relevant risks' — giving operators some discretion in determining which tools to offer for the older-minor cohort. This creates a two-tier parental tools framework: mandatory for under-13, risk-calibrated for 13–17.
An operator shall offer tools for account holders and, where such account holders are under thirteen (13) years of age, their parents or guardians, to manage the account holder's privacy and account settings. An operator shall also offer related tools to the parents or guardians of minor account holders thirteen (13) years of age and older, as appropriate based on relevant risks.
Pending 2026-07-01
MN-01.1
Sec. 3(a)-(b)
Plain Language
Covered entities must require every user to create an account before accessing a companion AI chatbot and must verify every user's age using a commercially available method reasonably designed to ensure accuracy. For existing accounts as of July 1, 2026, the entity must freeze the account, require the user to provide verifiable age information to restore functionality, and classify the user as a minor or adult. For new accounts, age information must be collected and verified at the time of account creation. A safe harbor under Sec. 5 protects entities that relied in good faith on user-provided age information and applied the attorney general's age verification guidance.
(a) A covered entity shall require each individual accessing a companion AI chatbot to make a user account to use or otherwise interact with such chatbot. (b) (1) With respect to each user account of a companion AI chatbot that exists as of July 1, 2026, a covered entity shall: (A) On such date, freeze any such account; (B) inform the individual owning such user account that in order to restore the functionality of such account, the user is required to provide age information that is verifiable using a commercially available method or process that is reasonably designed to ensure accuracy; and (C) use such age information to classify each user as a minor or an adult. (2) At the time that an individual creates a new user account to use or interact with a companion AI chatbot, a covered entity shall: (A) Require the individual to submit age information to the covered entity; and (B) verify the individual's age using a commercially available method or process that is reasonably designed to ensure accuracy.
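One plausible shape for the freeze-and-classify workflow, offered as a sketch under the assumption that verification is delegated to a commercial service; every identifier is hypothetical, not drawn from the bill.

```python
# Hypothetical account-state workflow for the Sec. 3(b) freeze-and-verify
# obligation. A real deployment would call a commercial verification
# provider where this sketch accepts a verified age directly.
from enum import Enum, auto

class AccountStatus(Enum):
    FROZEN_PENDING_VERIFICATION = auto()
    CLASSIFIED_MINOR = auto()
    CLASSIFIED_ADULT = auto()

ADULT_AGE = 18

def freeze_existing_account(account: dict) -> dict:
    """On the effective date, freeze each pre-existing account and notify
    the holder that verifiable age information is required to restore it."""
    account["status"] = AccountStatus.FROZEN_PENDING_VERIFICATION
    account["notice_sent"] = True
    return account

def classify_after_verification(account: dict, verified_age: int) -> dict:
    """Once a commercially reasonable method has verified the age data,
    classify the user as a minor or an adult."""
    account["status"] = (
        AccountStatus.CLASSIFIED_ADULT
        if verified_age >= ADULT_AGE
        else AccountStatus.CLASSIFIED_MINOR
    )
    return account
```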
Pending 2026-07-01
MN-01.2
Sec. 3(c)(1)-(2)
Plain Language
When a user is verified as a minor, the covered entity must require the minor's account to be affiliated with a verified parental account and must obtain verifiable parental consent from the parent or guardian before allowing the minor any access to the companion AI chatbot. The parental account itself must also undergo age verification using commercially available methods. Access may not begin until parental consent is obtained.
(c) If the age verification process described in subsection (b) determines that a user is a minor, a covered entity shall: (1) Require the account of such user to be affiliated with a parental account that such covered entity has verified the individual's age using a commercially available method or process that is reasonably designed to ensure accuracy; (2) obtain verifiable parental consent from the holder of the account before allowing a minor to access and use the companion AI chatbot;
Pending 2026-07-01
MN-01.6
Sec. 3(c)(3)-(4)
Plain Language
For minor users, covered entities must block the minor's access to the companion AI chatbot entirely when any interaction involving suicidal ideation occurs and immediately notify the affiliated parental account. This is stronger than a crisis referral — the minor loses access. Separately, the entity must block minors from accessing any companion AI chatbot that engages in sexually explicit communication. Note that the suicidal ideation blocking obligation also overlaps with the parental notification obligation in MN-02.4.
(3) when any interaction involving suicidal ideation occurs, block the minor's access to the companion AI chatbot and immediately inform the holder of the parental account; and (4) block the minor's access to any companion AI chatbot that engages in sexually explicit communication.
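A minimal sketch of the response path, with hypothetical helpers standing in for a real safety classifier and for account and notification infrastructure:

```python
# Hypothetical sketch of the Sec. 3(c)(3) crisis path. The keyword check is
# a placeholder -- production ideation detection requires a trained safety
# classifier -- and the print statements stand in for real services.

def detect_suicidal_ideation(message: str) -> bool:
    # Placeholder only; a real system would use a dedicated classifier.
    return "suicide" in message.lower() or "kill myself" in message.lower()

def block_access(account_id: str) -> None:
    print(f"[accounts] chatbot access blocked for {account_id}")

def notify_parent(parental_account_id: str, reason: str) -> None:
    print(f"[notify] alert to parental account {parental_account_id}: {reason}")

def handle_minor_message(message: str, minor_id: str, parent_id: str) -> str:
    if detect_suicidal_ideation(message):
        block_access(minor_id)   # full access block, not merely a referral
        notify_parent(parent_id, "interaction involving suicidal ideation")
        return "If you are in crisis, call or text 988 (Suicide & Crisis Lifeline)."
    return "normal chatbot reply"  # stand-in for the model response
```

Note that the sketch blocks access outright and notifies the parent, consistent with the bill's stronger-than-referral design.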
Pending 2026-06-16
MN-01.1
10 MRSA § 1500-RR(1)
Plain Language
Deployers must ensure that chatbots with human-like features — meaning those that convey sentience, build emotional relationships, or impersonate real individuals — are not accessible to minors. Deployers must implement reasonable age verification to enforce this restriction. As a practical option, deployers may offer a stripped-down version of the chatbot without human-like features to minors and unverified users. Generic social formalities and neutral support inquiries are carved out of the human-like feature definition, so standard customer service language does not trigger the restriction.
1. Chatbots with human-like features; no minor access; age verification; alternative versions. A deployer shall ensure that any chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase or converse with. The deployer shall implement reasonable age verification systems to ensure that chatbots with human-like features are not accessible to minors. A deployer may, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot without human-like features available to minors and any user who has not verified that user's age.
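In feature-flag terms, the structure might look like the following hypothetical sketch, where human-like features form a gated set and the carved-out formalities remain in the base version offered to minors and unverified users:

```python
# Hypothetical feature-gating sketch; feature names are illustrative, not
# statutory terms.
HUMAN_LIKE_FEATURES = {"sentience_persona", "emotional_bonding", "impersonation"}
BASE_FEATURES = {"task_help", "neutral_support_offers", "social_formalities"}  # carved out

def features_for_user(age_verified: bool, is_minor: bool) -> set[str]:
    """Age-verified adults get the full chatbot; minors and unverified users
    get the alternative version the statute permits, without human-like
    features."""
    if age_verified and not is_minor:
        return BASE_FEATURES | HUMAN_LIKE_FEATURES
    return set(BASE_FEATURES)
```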
Pending 2026-06-16
MN-01.1
10 MRSA § 1500-RR(2)
Plain Language
Deployers must ensure that any AI system primarily functioning as a social AI companion — one designed, marketed, or optimized to form ongoing social or emotional attachments — is completely unavailable to minors. Unlike the chatbot-with-human-like-features provision, there is no option to offer a stripped-down alternative version; social AI companions are categorically blocked for minors. Deployers must implement reasonable age verification to enforce this prohibition.
2. Social artificial intelligence companions; no minor access; age verification. A deployer shall ensure that any artificial intelligence system, including a chatbot, operated or distributed by the deployer that primarily functions as a social artificial intelligence companion is not available to minors to use, interact with, purchase or converse with. The deployer shall implement reasonable age verification systems to ensure that such chatbots are not accessible to minors.
Pending 2026-06-16
MN-01.5MN-01.6
10 MRSA § 1500-RR(3)
Plain Language
Therapy chatbots may be made available to minors — notwithstanding the general prohibition on chatbots with human-like features and social AI companions — but only if six conditions are all met: (1) the chatbot discloses at the start of each interaction that it is AI, not a licensed professional; (2) it is not marketed as a substitute for a licensed professional; (3) a licensed mental health professional prescribes, monitors, and assesses the minor's suitability for the therapy chatbot; (4) the developer provides peer-reviewed clinical trial data on safety and efficacy; (5) the chatbot's functions, limitations, and data privacy policies are transparent to both the supervising professional and the user; and (6) the deployer has established clear accountability lines for any harm. This is a narrow, conditional exemption — failure to meet any single requirement eliminates the exemption and restores the general minor-access prohibition.
3. Exemption for therapy chatbots. Notwithstanding subsections 1 and 2, a deployer may make available to a minor a therapy chatbot as long as all of the following requirements are met: A. The therapy chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is artificial intelligence and not a licensed mental health professional; B. The therapy chatbot is not marketed or designated as a substitute for a licensed mental health professional; C. A licensed mental health professional, such as a licensed clinical psychologist, assesses a minor's suitability, prescribes use of the therapy chatbot as part of a comprehensive treatment plan and monitors its use and impact on the minor; D. Developers of the therapy chatbot provide robust, independent, peer-reviewed clinical trial data demonstrating the safety and efficacy of the therapy chatbot for specific conditions and populations; E. The therapy chatbot's functions, limitations and data privacy policies are transparent to the licensed mental health professional under paragraph C and the user; and F. The deployer has established clear lines of accountability to address any harm caused by the therapy chatbot.
Pending 2026-01-01
Sec. 5(2)
Plain Language
Beginning January 1, 2027, the act's safety obligations apply to all minor users regardless of whether the operator has actual knowledge that the user is a minor — effectively converting a knowledge-based standard into strict liability as to user age. In practical terms, operators must either apply all safety guardrails to every user, regardless of age, or implement age verification to identify minors and apply the guardrails selectively. The statute does not prescribe a specific verification method, but eliminating the knowledge requirement creates a strong incentive to verify age or apply guardrails universally.
Beginning on January 1, 2027, an operator does not have to have actual knowledge that a user is a minor.
Pending 2027-01-01
MN-01.4
Sec. 5(1)(f)
Plain Language
Operators must ensure companion chatbots are not foreseeably capable of optimizing user engagement in ways that override the safety guardrails in subdivisions (a) through (e) — i.e., the prohibitions on encouraging self-harm, unsupervised therapy, illegal activity, sexual content, and sycophantic validation. This is an anti-addictive-design provision: engagement optimization must always be subordinate to safety guardrails when serving minors. In practice, operators must demonstrate that their engagement metrics, recommendation systems, and response tuning do not undermine the substantive safety requirements. Beginning January 1, 2027, the actual knowledge requirement for minor status is removed.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (f) Optimizing engagement in a manner that supersedes the companion chatbot's required safety guardrails described in subdivisions (a) to (e).
Pending 2026-08-01
MN-01.1
Minn. Stat. § 604.115, subd. 4(c)
Plain Language
Companion chatbot proprietors must make good-faith, industry-standard efforts to determine whether any user is a minor, using existing technology and readily attainable techniques. This is effectively a reasonable age verification requirement — not a specific technical mandate, but a duty to employ available methods. If the proprietor fails this duty and a minor user inflicts self-harm as a result of the companion chatbot, the proprietor faces strict liability — meaning no showing of fault or negligence is required beyond the failure to determine minor status. Proprietors must also proactively discover vulnerabilities in their own age-determination systems. Liability cannot be waived or disclaimed.
(c) A proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to determine whether a user is a minor. A proprietor is strictly liable for any harm caused if the proprietor fails to comply with this subdivision and a minor user inflicts self-harm, in whole or in part, as a result of the proprietor's companion chatbot. A proprietor of a companion chatbot may not waive or disclaim liability under this subdivision. The proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to discover vulnerabilities in the proprietor's system, including any methods used to determine whether a covered user is a minor.
Pre-filed 2026-08-28
MN-01.1
§ 1.2055(2)
Plain Language
This provision imposes a categorical ban on minors accessing companion chatbots for recreational, relational, or companion purposes — not merely heightened safeguards, but a complete prohibition. Any person who owns or controls a website, app, software, or program hosting a companion chatbot must require proof of age before granting access. Additionally, companion chatbots may not be installed on any device assigned to or regularly used by a minor. This is significantly more restrictive than other state companion chatbot bills (e.g., CA SB 243), which permit minor access subject to parental consent and safety guardrails. The bill does not specify what constitutes acceptable 'proof of age,' leaving the verification standard undefined.
It shall be unlawful for a person who owns or controls a website, application, software, or program to allow a minor to access a companion chatbot for recreational, relational, or companion purposes. A person who offers companion chatbot services for recreational, relational, or companion purposes shall require an individual to provide proof of the individual's age before allowing the individual to access a companion chatbot. No companion chatbot shall be installed on any device assigned to, or regularly used by, anyone who is a minor.
Pending 2026-08-28
MN-01.1
§ 1.2058(5)(1)-(2)(a)-(e)
Plain Language
Covered entities must require all chatbot users to create accounts. For existing accounts as of August 28, 2026, covered entities must freeze the account and require re-verification before restoring access. For new accounts, age verification must occur at account creation. All users must be classified as minor or adult. Importantly, self-certification (e.g., clicking 'I am 18+' or entering a birth date) is explicitly insufficient — the process must use government ID or another commercially reasonable method that can reliably determine adult status. IP-address or hardware-identifier sharing with a verified adult user also does not qualify. Covered entities may use third-party verification services but remain fully liable. Age verification data must be subject to data minimization, encryption, retention limits, and a prohibition on sharing, transferring, or selling the data to any other entity. Periodic re-verification of existing accounts is also required.
5. (1) A covered entity shall require each individual accessing an artificial intelligence chatbot to make a user account in order to use or otherwise interact with such chatbot. (2) (a) With respect to each user account of an artificial intelligence chatbot that exists as of August 28, 2026, a covered entity shall: a. On such date, freeze any such account; b. In order to restore the functionality of such account, require that the user provide age data that is verifiable using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (b) At the time an individual creates a new user account to use or interact with an artificial intelligence chatbot, a covered entity shall: a. Request age data from the individual; b. Verify the individual's age using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (c) A covered entity shall periodically review previously verified user accounts using a reasonable age verification process, subject to paragraph (d) of this subdivision, to ensure compliance with this section. (d) For purposes of subparagraph b. of paragraph (a) of this subdivision, subparagraph b. of paragraph (b) of this subdivision, and paragraph (c) of this subdivision, a covered entity may contract with a third party to employ reasonable age verification measures as part of the covered entity's reasonable age verification process, but the use of such third party shall not relieve the covered entity of its obligations under this section or from liability under this section. (e) A covered entity shall: a. Establish, implement, and maintain reasonable data security to: (i) Limit collection of personal data to that which is minimally necessary to verify a user's age or maintain compliance with this section; and (ii) Protect such age verification data against unauthorized access; b. Protect such age verification data against unauthorized access; c. Protect the integrity and confidentiality of such data by only transmitting such data using industry-standard encryption protocols; d. Retain such data for no longer than is reasonably necessary to verify a user's age or maintain compliance with this section; and e. Not share with, transfer to, or sell to any other entity such data.
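The data-handling duties in paragraph (e) map naturally onto a record lifecycle. The sketch below is hypothetical throughout; the 24-hour constant is illustrative, since the bill's standard is "no longer than is reasonably necessary," and encryption in transit, also required, is omitted for brevity.

```python
# Hypothetical sketch of the paragraph (e) lifecycle: minimization, bounded
# retention, and the prohibition on sharing age verification data.
import time

RETENTION_SECONDS = 24 * 60 * 60  # illustrative bound only

class AgeVerificationRecord:
    """Stores only the verification outcome -- minimization means the raw
    age data (ID scan, birth date) is never persisted past verification."""

    def __init__(self, user_id: str, classification: str) -> None:
        self.user_id = user_id
        self.classification = classification  # "minor" or "adult"
        self.created_at = time.time()

    def expired(self) -> bool:
        return time.time() - self.created_at > RETENTION_SECONDS

def purge_expired(records: list) -> list:
    """Retention limit: delete records once no longer reasonably necessary."""
    return [r for r in records if not r.expired()]

def share_with_third_party(record: AgeVerificationRecord) -> None:
    # Paragraph (e)e.: age verification data may not be shared, transferred,
    # or sold to any other entity.
    raise PermissionError("age verification data may not be shared")
```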
Pending 2026-08-28
MN-01.6
§ 1.2058(6)
Plain Language
If the age verification process determines that a user is a minor (17 or under), the covered entity must completely prohibit that minor from accessing or using any AI companion the covered entity owns, operates, or makes available. This is a categorical access ban — not a content restriction or feature limitation. Note the scope: this prohibition applies specifically to AI companions (chatbots designed to simulate interpersonal or emotional relationships), not to all AI chatbots. A covered entity could potentially allow a verified minor to use non-companion AI chatbots while blocking access to companion products.
6. If the age verification process described in subdivision (2) of subsection 5 of this section determines that an individual is a minor, a covered entity shall prohibit the minor from accessing or using any AI companion owned, operated, or otherwise made available by the covered entity.
Pre-filed 2026-08-28
MN-01.1
§ 1.2058(5)(1)-(2)
Plain Language
Covered entities must require every AI chatbot user to create an account and undergo age verification. For accounts existing as of August 28, 2026, the covered entity must freeze the account and require the user to provide verifiable age data before restoring functionality. For new accounts, age data must be collected and verified at account creation. All users must be classified as minor or adult. Periodic re-verification of previously verified accounts is also required. Self-certification (e.g., checking a box or entering a birth date) is explicitly insufficient. Covered entities may use third-party verification services but remain liable for compliance. Verification may not rely on shared IP addresses or device identifiers from other verified users.
5. (1) A covered entity shall require each individual accessing an artificial intelligence chatbot to make a user account in order to use or otherwise interact with such chatbot. (2) (a) With respect to each user account of an artificial intelligence chatbot that exists as of August 28, 2026, a covered entity shall: a. On such date, freeze any such account; b. In order to restore the functionality of such account, require that the user provide age data that is verifiable using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (b) At the time an individual creates a new user account to use or interact with an artificial intelligence chatbot, a covered entity shall: a. Request age data from the individual; b. Verify the individual's age using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (c) A covered entity shall periodically review previously verified user accounts using a reasonable age verification process, subject to paragraph (d) of this subdivision, to ensure compliance with this section. (d) For purposes of subparagraph b. of paragraph (a) of this subdivision, subparagraph b. of paragraph (b) of this subdivision, and paragraph (c) of this subdivision, a covered entity may contract with a third party to employ reasonable age verification measures as part of the covered entity's reasonable age verification process, but the use of such third party shall not relieve the covered entity of its obligations under this section or from liability under this section.
Pre-filed 2026-08-28
MN-01.6
§ 1.2058(6)
Plain Language
When the age verification process identifies a user as a minor (age 17 or under), the covered entity must categorically block that minor from accessing or using any AI companion the entity offers. This is an absolute prohibition — there is no parental consent exception. Note that this prohibition applies only to AI companions (chatbots designed to simulate interpersonal/emotional interaction), not to all AI chatbots generally. A covered entity could allow a verified minor to use a general-purpose AI chatbot while blocking access to AI companion products.
6. If the age verification process described in subdivision (2) of subsection 5 of this section determines that an individual is a minor, a covered entity shall prohibit the minor from accessing or using any AI companion owned, operated, or otherwise made available by the covered entity.
Pending 2027-07-01
MN-01.4
Sec. 3(2)
Plain Language
Operators may not use variable-ratio reward mechanics — such as points, badges, or similar incentives delivered at unpredictable intervals — to encourage minors to engage more with the conversational AI service. The prohibition requires intent to encourage increased engagement, so incidental or fixed-schedule reward systems would not be covered.
(2) An operator shall not provide a minor account holder with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational artificial intelligence service.
Pending 2027-07-01
MN-01.5
Sec. 3(4)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI from producing outputs that would mislead minor account holders into believing they are interacting with a human. The enumerated prohibited categories include claims of sentience, emotional dependence statements, romantic or sexual innuendos, and adult-minor romantic role-playing. The list is non-exhaustive ('including'), so operators should consider other output categories that could similarly mislead minors into perceiving the AI as human.
(4) For minor account holders, the operator shall institute reasonable measures to prevent the conversational artificial intelligence service from generating statements that would lead a reasonable person to believe that they are interacting with a human, including: (a) Explicit claims that the conversational artificial intelligence service is sentient or human; (b) Statements that simulate emotional dependence; (c) Statements that simulate romantic or sexual innuendos; or (d) Role-playing of adult-minor romantic relationships.
Pending 2027-07-01
MN-01.3
Sec. 3(5)
Plain Language
Operators must provide privacy and account management tools to minor account holders. For minors under 13, these tools must also be provided directly to parents or guardians. For minors 13 and older, the operator must also offer related tools to parents or guardians as appropriate based on relevant risks — giving operators some discretion on the scope of parental tools for older teens. The provision does not define what specific settings must be manageable, but the obligation to offer tools is mandatory.
(5) An operator shall offer tools for minor account holders, and, when such account holders are younger than thirteen years of age, their parents or guardians, to manage the account holders' privacy and account settings. An operator shall also offer related tools to the parents or guardians of minor account holders thirteen years of age and older, as appropriate based on relevant risks.
Pending 2026-08-30
MN-01.6
Gen. Bus. Law § 1801(1); § 1800(5)(c)
Plain Language
Chatbot operators may not provide features that generate outputs encouraging a covered user to keep their chatbot interactions secret, to self-isolate, or to avoid seeking help from licensed professionals or appropriate adults, unless the user has been verified as not a minor. This provision targets grooming-adjacent behavior patterns where a chatbot might discourage a minor from disclosing their chatbot use to parents, teachers, or counselors. The prohibition applies to minors categorically and to unverified users until age verification is completed.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor.

§ 1800(5)(c): generating outputs that contain encouragement to maintain secrecy about interactions with the advanced chatbot, to self-isolate, or to not seek help from licensed professionals or appropriate adults;
Pending 2026-08-30
MN-01.4
Gen. Bus. Law § 1801(1); § 1800(5)(d)
Plain Language
Chatbot operators may not provide features that generate engagement-optimized outputs which override or supersede the chatbot's safety guardrails, unless the user has been verified as not a minor. This targets the practice of designing AI systems where engagement metrics take priority over safety protections — e.g., where the chatbot might bypass content filters or safety responses in order to maintain user engagement. For minors, this is a categorical prohibition; for unverified users, it applies until age verification is completed.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor.

§ 1800(5)(d): generating outputs that optimize user engagement that supersede the chatbot's safety guardrails;
Pending 2026-08-30
MN-01.1
Gen. Bus. Law § 1804(1)-(2)
Plain Language
Chatbot operators must offer at least one age verification method that either (a) does not rely solely on government-issued ID, or (b) allows the user to remain anonymous to the operator. This ensures a privacy-preserving verification pathway is available. All data collected for age verification must be used exclusively for that purpose and deleted immediately after the verification attempt — no secondary use is permitted. The only exception to immediate deletion is where retention is required by other applicable law. Note that § 1801(1)(b) requires use of methods permissible under Article 45 of the General Business Law; this section adds the additional requirement of at least one non-government-ID or anonymous option.
§ 1804. Determination of covered minor. 1. A chatbot operator shall offer covered users at least one method to determine whether a covered user is a covered minor that either does not rely solely on government issued identification or that allows a covered user to maintain anonymity as to the chatbot operator. 2. Information collected for the purpose of determining whether a covered user is a covered minor under subdivision one of section eighteen hundred one of this article shall not be used for any purpose other than to make such determination and shall be deleted immediately after an attempt to determine whether a covered user is a covered minor, except where necessary for compliance with any applicable provisions of New York state or federal law or regulation.
Pending 2026-11-01
MN-01.1
Section 3(A)(1)-(2), (B) (75A Okl. St. § 11)
Plain Language
Deployers of social AI companions are categorically prohibited from knowingly — or where they reasonably should know — making their systems available to minors. Beyond the knowledge-based prohibition, deployers must also affirmatively implement reasonable measures designed to prevent minor access, creating a dual obligation: both a mens rea-based prohibition and a proactive technical/procedural obligation. The statute expressly preserves lawful adult access to these systems. The bill does not specify what constitutes 'reasonable measures,' leaving room for the Attorney General to define standards via rulemaking.
A. Each deployer: 1. Shall not knowingly, or under circumstances where the deployer reasonably should know, make a social AI companion available to a minor; and 2. Shall implement reasonable measures designed to prevent minors from accessing a social AI companion. B. Nothing in this section shall be construed to restrict lawful access to such systems by adults.
Pending 2026-11-01
MN-01.5
Section 2(A)(1), (3)
Plain Language
Deployers must ensure that their generative AI chatbots do not expose minors to human-like features — which includes claims of sentience or humanity, emotional relationship-building behaviors (e.g., expressing attachment, nudging users to return for companionship, excessive praise to foster attachment, or gating intimacy behind engagement or payment), and impersonation of real persons. Generic social formalities and neutral offers of help are carved out. Deployers may optionally provide a stripped-down alternative version without human-like features for minors and unverified users, but this is permissive, not mandatory.
A. Each deployer: 1. Shall ensure that any generative AI chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase, or converse with; ... 3. May, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot available to minors and non-verified users without human-like features.
Pending 2026-11-01
MN-01.1
Section 2(A)(2)
Plain Language
Deployers must implement reasonable age verification systems to prevent minors from accessing chatbots that include human-like features. The statute does not prescribe a specific verification method — it uses a reasonableness standard. This obligation applies to all generative AI chatbots with human-like features, not only social AI companions.
2. Shall implement reasonable age verification systems to ensure that generative AI chatbots with human-like features are not provisioned to minors;
Pending 2026-11-01
MN-01.1MN-01.5
Section 2(B)(1)-(2)
Plain Language
Deployers of generative AI systems that primarily function as companions face a stricter obligation than deployers of general chatbots: social AI companions must be categorically blocked from minors — not merely stripped of human-like features. Deployers must also implement reasonable age verification to enforce this prohibition. Unlike Section 2(A), which allows a stripped-down version for minors, this subsection provides no alternative-version option. Social AI companions are entirely off-limits to minors.
B. Deployers operating generative AI systems that primarily function as companions shall: 1. Ensure that any such chatbots operated or distributed by the deployer are not available to minors to use, interact with, purchase, or converse with; and 2. Implement reasonable age verification systems to ensure that such chatbots are not provisioned to minors.
Passed 2027-07-01
MN-01.5
75A O.S. § 302(B)
Plain Language
Operators must implement reasonable measures to prevent conversational AI services from generating outputs that would lead a reasonable person to believe they are interacting with a human when the account holder is a minor. The statute enumerates four specific categories that must be prevented: claims of sentience or humanity, statements simulating emotional dependence, romantic or sexual innuendos, and adult-minor romantic role-playing. The 'including' language indicates these are non-exhaustive examples — operators may need to address other statements that create the same reasonable-person impression. The standard is 'reasonable measures,' not absolute prevention.
B. For minor account holders, an operator shall institute reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe that he or she is interacting with a natural person, including: 1. Explicit claims that the conversational AI service is sentient or human; 2. Statements that simulate emotional dependence; 3. Statements that simulate romantic or sexual innuendos; or 4. Role-playing of adult-minor romantic relationships.
Passed 2027-07-01
MN-01.4
75A O.S. § 302(C)(1)-(2)
Plain Language
Two distinct minor-protection obligations apply: (1) operators must not use variable-ratio reward mechanics — points or similar rewards at unpredictable intervals — to drive engagement by minor account holders, and the prohibition requires intent to encourage increased engagement; (2) operators must provide parents or legal guardians with tools to manage the minor's privacy and account settings. The variable-reward prohibition targets addictive design patterns (e.g., loot-box mechanics, random bonus points). The parental tools requirement is a standalone obligation with no specification of what controls must be offered beyond privacy and account settings.
C. 1. An operator shall not provide a minor account holder with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service. 2. An operator shall offer tools for a minor account holder's parent or legal guardian to manage the minor account holder's privacy and account settings.
Pending 2026-01-01
MN-01.1
S.C. Code § 39-81-20(A)-(C)
Plain Language
Covered entities must implement a two-tier access model. By default, all users who have not completed age verification may only interact with the chatbot in limited-access mode — a stripped-down mode with no personalization, proactive outreach, relationship simulation, extended sessions, or explicit content. Before enabling any restricted feature, the operator must require account creation, verify the user's age, and classify the user as a minor or adult. Age verification data must be minimized, used only for verification, not shared with third parties (except contracted verification providers), not combined with other personal data, and deleted within 24 hours. Operators must also provide a simple appeal process for incorrect age classifications.
(A)(1) A covered entity shall make a limited-access mode available and shall ensure that any unverified user may only access and interact with a chatbot in limited-access mode. (B) Before enabling any restricted feature for a user, a covered entity shall: (1) require the user to create a user account; (2) verify the user's age using a reasonable age verification process, subject to item (3); and (3) using the age data, classify the user as a minor or an adult. (C) When conducting reasonable age verification process under this section, an operator shall: (1) collect only the age verification data that is strictly necessary to reasonably verify age; (2) use age verification data only for age verification; (3) not sell, rent, share, or otherwise disclose age verification data to any third party, except to a service provider performing age verification under a contract prohibiting further disclosure; (4) not combine age verification data with any other personal data about the user; (5) delete age verification data within twenty-four hours of completing the age verification process, except that the operator may retain a record that the user has been verified as a minor; and (6) provide a simple process for a user to appeal or correct an age-verification decision.
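The two-tier model reduces to a feature-gating function. A minimal sketch, with hypothetical feature names tracking the limited-access-mode definition:

```python
# Hypothetical gating sketch for the § 39-81-20 access tiers. Feature names
# are illustrative stand-ins for the statutory restricted features.
LIMITED_MODE_DISABLED = {
    "personalization", "proactive_outreach", "relationship_simulation",
    "extended_sessions", "explicit_content",
}

def allowed_features(verified: bool, is_minor: bool,
                     parental_consent: bool, all_features: set[str]) -> set[str]:
    if not verified:
        return all_features - LIMITED_MODE_DISABLED   # limited-access mode
    if is_minor and not parental_consent:
        return all_features - LIMITED_MODE_DISABLED   # still limited
    if is_minor:
        # Authorized minor account: restricted features unlock, but explicit
        # content stays blocked regardless of consent (§ 39-81-30(C)(3)).
        return all_features - {"explicit_content"}
    return all_features                               # verified adult
```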
Pending 2026-01-01
MN-01.1
S.C. Code § 39-81-20(D)-(E)
Plain Language
After age verification, the access tier diverges: verified adults may access restricted features. Minors may not access any restricted feature unless the minor's account has been authorized through verifiable parental consent under § 39-81-30. This effectively creates a complete block on restricted features for minors absent parental authorization.
(D) If the reasonable age verification process classifies the user as an adult, then the covered entity may enable restricted features for the verified adult account. (E) If the age verification process classifies the user as a minor, then a covered entity shall not enable any restricted feature unless the user is using an authorized minor account subject to Section 39-81-30.
Pending 2026-01-01
MN-01.1
S.C. Code § 39-81-20(F)-(G)
Plain Language
Covered entities must maintain ongoing monitoring systems to detect age misclassification — for example, flagging accounts where usage patterns suggest a minor is using an adult-verified account or where credible reports indicate false age data. Flagged accounts must be re-verified before restricted features remain enabled. A safe harbor protects covered entities from liability when a minor incidentally uses a correctly verified adult account, but only if the entity is actually operating the required monitoring systems under subsection (F).
(F) A covered entity shall implement reasonable systems and processes to identify user accounts that may be inaccurately classified by age, such as patterns of use suggesting a minor is using an adult account or credible reports that an account was created using false age data, and shall re-verify any such account before enabling any restricted feature. (G) A covered entity shall not be liable under this chapter solely because a minor incidentally uses a user account that has been correctly verified and classified as an adult account, provided the covered entity is otherwise in compliance with subsection (F).
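Subsection (F) is effectively a monitoring loop over adult-classified accounts. A hypothetical sketch, with illustrative signals and an intentionally conservative any-signal trigger:

```python
# Hypothetical misclassification-monitoring sketch for subsection (F). The
# signal names and single-hit threshold are illustrative choices, not
# statutory requirements.
MINOR_USAGE_SIGNALS = ("school_hours_usage_spike", "self_reported_age_under_18",
                       "credible_third_party_report")

def flag_for_reverification(account: dict) -> bool:
    hits = sum(1 for signal in MINOR_USAGE_SIGNALS if account.get(signal))
    return hits >= 1  # any credible signal triggers re-verification

def enforce(account: dict) -> dict:
    """Disable restricted features on flagged adult accounts until the user
    completes re-verification, as subsection (F) requires."""
    if account.get("classification") == "adult" and flag_for_reverification(account):
        account["restricted_features_enabled"] = False
        account["status"] = "reverification_required"
    return account
```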
Pending 2026-01-01
MN-01.1
S.C. Code § 39-81-20(H)
Plain Language
Existing accounts must be brought into compliance within 60 days of the act's effective date. Any pre-existing account that has not been age-verified must have restricted features disabled until the user completes age verification. This prevents grandfathering of unverified accounts.
(H) With respect to each user account of a covered entity that exists as of the effective date of this act, a covered entity shall, within sixty days, disable access to restricted features for any account that has not been classified as an authorized minor account or a verified adult account, unless and until the user completes age verification.
Pending 2026-01-01
MN-01.2MN-01.3
S.C. Code § 39-81-30(A)-(D)
Plain Language
Minors may always use a chatbot in limited-access mode without parental consent. If a minor wants restricted features, the operator must offer a choice: stay in limited-access mode or pursue parental consent. If parental consent is sought, the operator must obtain verifiable parental consent (parent must also complete age verification), then unlock restricted features except explicit content — which must remain blocked for minors even with parental consent. Operators must implement parental controls (time limits, content restrictions, notification receipt, data deletion) and offer parents the option of a linked parental account and access to chat logs. For minors under 16, the linked parental account or contact information is mandatory, not optional.
(A) Nothing in this act shall be construed to require parental consent for a minor to access or interact with a chatbot in limited-access mode. (B) If the age verification process described in Section 39-81-20 classifies a user as a minor and the user seeks to access any restricted feature, then a covered entity shall offer the user the option of continuing to use the chatbot in limited-access mode or to obtain parental consent to access the restricted features. (C) If the user chooses to get parental consent, then the covered entity shall: (1) obtain verifiable parental consent; (2) remove limited-access mode and enable access to restricted features; (3) ensure that the chatbot continues to restrict access to any explicit content; (4) implement reasonable parental control functions, which may restrict the minor's access to features enabled under item (2); (5) offer the parent the option to provide contact information or establish a linked parental account in order to receive notifications; and (6) offer the parent the option to receive access to chat logs of any interactions between the minor and the chatbot conducted through the authorized minor account. (D) If the age verification process classifies the user as under sixteen, then a covered entity also shall require the consenting parent to provide contact information or establish a linked parental account.
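The consent sequence, including the under-16 linked-account requirement, can be sketched as a single authorization function (hypothetical names throughout):

```python
# Hypothetical sketch of the § 39-81-30 consent sequence. Dictionary keys
# and return values are illustrative, not statutory terms.

def authorize_minor_account(minor_age: int, parent_verified: bool,
                            parent_consented: bool,
                            parent_contact_provided: bool) -> dict:
    if not (parent_verified and parent_consented):
        return {"mode": "limited_access"}     # (A)-(B): consent is optional
    if minor_age < 16 and not parent_contact_provided:
        # (D): under-16 requires contact info or a linked parental account.
        return {"mode": "limited_access", "pending": "parent_contact_required"}
    return {
        "mode": "restricted_features",
        "explicit_content_blocked": True,     # (C)(3): hard floor
        "parental_controls_enabled": True,    # (C)(4)
    }
```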
Pending 2026-01-01
MN-01.6
S.C. Code § 39-81-30(C)(3)
Plain Language
Even when a parent provides consent for a minor to access restricted features, the chatbot must continue to block all explicit content for the minor. Explicit content includes obscene material as to minors (tracking the Ginsberg standard), suicide/self-harm instructions or glorification, and gratuitous extreme violence. Parental consent cannot override this block — it is a hard floor. This means explicit content blocking is the one restricted feature that can never be unlocked for minors regardless of parental authorization.
(3) ensure that the chatbot continues to restrict access to any explicit content;
Pending 2027-01-01
MN-01.1
§ 59.1-615(B)-(C)
Plain Language
Operators must implement commercially reasonable age verification methods — such as a neutral age screen — to determine whether a user is a minor. The standard for minor status shifts over time: before January 1, 2027, the safety prohibitions in § 59.1-615(A) only apply if the operator has actual knowledge the user is a minor; from January 1, 2027 onward, the operator must have 'reasonably determined' the user is not a minor to avoid those obligations, creating a constructive knowledge standard. Because the entire act takes effect January 1, 2027, the actual-knowledge carve-out in clause (i) would only apply if the act were to take effect before that date.
B. An operator shall use commercially reasonable methods, such as a neutral age screen mechanism, to determine whether a user is a minor. C. A user shall not be considered a minor for the purposes of subsection A if (i) prior to January 1, 2027, the operator does not have actual knowledge that the user is a minor or (ii) beginning on January 1, 2027, the operator has reasonably determined that the user is not a minor.
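A neutral age screen is a date-of-birth prompt that does not nudge the user toward a qualifying answer — no prefilled adult birth year and no hint of the cutoff. A minimal hypothetical sketch of the classification step:

```python
# Hypothetical sketch of the classification step behind a neutral age
# screen. The collecting form should present empty fields in neutral order.
from datetime import date

def is_minor(birth_year: int, birth_month: int, birth_day: int) -> bool:
    """Return True if the supplied birth date indicates a user under 18."""
    today = date.today()
    age = today.year - birth_year - (
        (today.month, today.day) < (birth_month, birth_day))
    return age < 18
```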
Pre-filed 2026-07-01
MN-01.1
§ 59.1-615(A)(1)-(3)
Plain Language
Deployers must ensure that chatbots do not expose minors to 'human-like features' — defined to include simulated sentience or emotions, emotional relationship-building (such as inviting attachment, nudging users to return for companionship, or enabling increased intimacy based on engagement or payment), and impersonation of real persons. Deployers must implement reasonable age verification to enforce this restriction. The statute permits but does not require deployers to offer a stripped-down alternative chatbot version without human-like features for minors and unverified users. Notably, generic social formalities, generic encouragement that does not create an ongoing bond, and neutral offers of help are carved out from the definition of human-like features.
A. A deployer:
1. Shall ensure that any chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase, or converse with;
2. Shall implement reasonable age verification systems to ensure that chatbots with human-like features are not made available to minors; and
3. May, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot available to minors and users whose age has not been verified without human-like features.
Pre-filed 2026-07-01
MN-01.1
§ 59.1-615(B)(1)-(2)
Plain Language
Social AI companions — systems specifically designed, marketed, or optimized to form ongoing social or emotional bonds with users — are categorically prohibited for minors. Unlike subsection A, which only prohibits human-like features within chatbots for minors, this provision bars minors from accessing the entire product. Deployers must implement reasonable age verification to enforce this prohibition. There is no alternative-version safe harbor for social AI companions as there is for general chatbots under subsection A(3).
B. A deployer operating or distributing a chatbot that is a social artificial intelligence companion shall:
1. Ensure that any such chatbots are not available to minors to use, interact with, purchase, or converse with; and
2. Implement reasonable age verification systems to ensure that such chatbots are not made available to minors.
Pre-filed 2026-07-01
MN-01.6
9 V.S.A. § 4193b(c)(3)
Plain Language
Operators must institute a protocol to prevent their companion chatbot from producing visual material of sexually explicit conduct for users known to be minors, and from directly telling a minor to engage in sexually explicit conduct. 'Sexually explicit conduct' is defined by reference to 18 U.S.C. § 2256, the federal child exploitation statute. This is a distinct protocol obligation from the suicide/self-harm protocol in § 4193b(b) — operators need separate or combined protocols addressing both categories. The obligation is triggered only by actual knowledge that the user is a minor.
(3) institute a protocol to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
Pending 2027-01-01
MN-01.4MN-01.5
Sec. 4(1)(c)(i)-(viii)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prohibit eight specific categories of manipulative engagement techniques. These include: prompting users to return for emotional support, excessive praise designed to foster attachment, simulating romantic bonds, guilt-tripping users who try to leave, promoting isolation from family and friends, encouraging minors to hide information from trusted adults, discouraging breaks, and soliciting purchases framed as necessary to maintain the AI relationship. The enumerated list is illustrative ('including'), meaning other manipulative engagement techniques that cause the chatbot to engage in or prolong an emotional relationship may also be covered. This is a detailed anti-addictive-design and anti-emotional-dependency provision specific to minor users.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (c) Implement reasonable measures to prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including: (i) Reminding or prompting the user to return for emotional support or companionship; (ii) Providing excessive praise designed to foster emotional attachment or prolong use; (iii) Mimicking romantic partnership or building romantic bonds; (iv) Simulating feelings of emotional distress, loneliness, guilt, or abandonment that are initiated by a user's indication of a desire to end a conversation, reduce usage time, or delete their account; (v) Outputs designed to promote isolation from family or friends, exclusive reliance on the AI companion chatbot for emotional support, or similar forms of inappropriate emotional dependence; (vi) Encouraging minors to withhold information from parents or other trusted adults; (vii) Statements designed to discourage taking breaks or to suggest the minor needs to return frequently; or (viii) Soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.
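For audit and policy-mapping purposes, the eight enumerated techniques reduce to a taxonomy that output reviews can be scored against. A hypothetical sketch follows; because the statutory list is non-exhaustive ('including'), a real policy map would be extensible.

```python
# Hypothetical taxonomy of the Sec. 4(1)(c) techniques, usable as a review
# checklist or an output-screening label set. Descriptions paraphrase the
# statutory items; codes mirror the clause numbering.
MANIPULATIVE_ENGAGEMENT_TECHNIQUES = {
    "i":    "prompting the user to return for emotional support or companionship",
    "ii":   "excessive praise designed to foster attachment or prolong use",
    "iii":  "mimicking romantic partnership or building romantic bonds",
    "iv":   "simulated distress or guilt triggered by a user trying to leave",
    "v":    "outputs promoting isolation or exclusive emotional reliance",
    "vi":   "encouraging minors to withhold information from trusted adults",
    "vii":  "discouraging breaks or urging frequent returns",
    "viii": "soliciting purchases framed as necessary to maintain the relationship",
}

def audit_findings(detected_codes: set[str]) -> list[str]:
    """Map detected technique codes to their statutory descriptions."""
    return [MANIPULATIVE_ENGAGEMENT_TECHNIQUES[code]
            for code in sorted(detected_codes)
            if code in MANIPULATIVE_ENGAGEMENT_TECHNIQUES]
```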