MN-01
Minor Protection
Minor User AI Safety Protections
Applies to: Developer, Deployer
Sector: Consumer Technology, Social Media, Education, Chatbot
Bills — Enacted: 0 unique bills
Bills — Proposed: 39
Last Updated: 2026-03-29
Core Obligation

Operators and deployers of AI systems — particularly conversational AI, companion chatbots, and social media platforms — that are or may be accessible to minors may be required to implement reasonable age verification processes, obtain parental consent where required, provide parental control tools, restrict manipulative engagement features, prevent harmful content exposure, and institute crisis response protocols. Systems must not deploy addictive design patterns, variable-ratio reward mechanics, or emotional dependency features toward minor users.

Sub-Obligations (11)
ID
Name & Description
Enacted
Proposed
MN-01.1
Age Verification Implementation: Covered entities must implement a reasonable age verification process for all users, classify each user as a minor or adult, and freeze or restrict existing accounts pending verification where required. Age verification data must be minimized, used solely for verification purposes, and deleted immediately upon completion.
0 enacted
22 proposed
MN-01.2
Parental Consent and Account Affiliation: Where a user is a minor, operators must obtain verifiable parental or guardian consent before permitting account creation or access to AI companion products. Minor accounts may be required to be affiliated with a verified parental account.
0 enacted
10 proposed
MN-01.3
Parental Control Tools: Operators must offer minor account holders and their parents or guardians tools to manage privacy and account settings, including interaction data retention preferences, time limits, access-hour controls, and content restrictions. For minors under thirteen, parental tools must be provided directly to parents or guardians.
0 enacted
14 proposed
MN-01.4
Engagement Manipulation Restrictions for Minors: Operators must not provide minor users with points or similar rewards at unpredictable intervals intended to encourage increased engagement, and must not deploy addictive design features (infinite scrolling, autoplay, push notifications, engagement metrics, gamification badges) toward minors.
0 enacted
12 proposed
MN-01.5
Emotional Dependency and Grooming Prevention: Operators must institute reasonable measures to prevent AI systems from generating statements that simulate emotional dependence with minor users, including prohibiting claims of sentience, romantic or sexual innuendo, adult-minor romantic role-playing, and sexual objectification of minor account holders.
0 enacted
15 proposed
MN-01.6
Minor Harmful Content Blocking: Operators must block minor users from accessing AI interactions involving suicidal ideation prompts, sexually explicit communications, material harmful to minors, and content that encourages self-harm or violence.
0 enacted
15 proposed
MN-01.7
Minor Behavioral Advertising Blocking: Profile-based behavioral advertising must not be presented to minors.
0 enacted
1 proposed
MN-01.8
Minor Default Privacy Configuration: Default privacy settings for minor users must be configured to the highest level of privacy, including hiding accounts from adult users, disabling search indexing, and blocking unsolicited notifications where applicable.
0 enacted
1 proposed
MN-01.9
Minor Account Termination and Data Deletion: Operators must honor minor or parental requests to terminate a minor's account within defined timeframes, permanently delete all associated personal information, and provide accessible tools for account deletion requests.
0 enacted
4 proposed
MN-01.10
Minor-Specific Crisis Notification: When a minor account holder expresses suicidal ideation or intent to self-harm, operators must notify the affiliated parent or guardian account in addition to providing crisis referral information to the user.
0 enacted
1 proposed
MN-01.11
Categorical Minor Access Prohibition: Covered entities must prohibit minors from accessing or using defined categories of AI products (e.g., AI companions, social AI) entirely, rather than merely restricting specific content or features within those products.
0 enacted
4 proposed
Bills That Map This Requirement (39 bills)
Bill
Status
Sub-Obligations
Section
Pending 2026-10-01
MN-01.1
Section 2(a)-(b)
Plain Language
Covered entities must require all users to create accounts and undergo age verification before accessing an AI chatbot. For existing accounts, the entity must freeze the account until the user completes verification. For new accounts, verification must occur at sign-up. Each user must be classified as a minor or adult. The entity must also periodically re-verify previously verified accounts. Acceptable verification requires government-issued ID or a commercial age verification system, plus user confirmation of non-minor status — merely entering a birth date or inferring age from IP address or hardware identifiers does not qualify. Third-party verification contractors may be used but do not relieve the covered entity of liability.
(a) Each covered entity shall require each individual accessing an AI chatbot to make a user account in order to use or otherwise interact with the AI chatbot. (b)(1) With respect to each existing user account of an AI chatbot, a covered entity shall: a. Freeze existing user accounts; b. Require that the user is age verified through a reasonable age verification process to restore the functionality of the account; and c. Classify each age-verified user as a minor or an adult based on the reasonable age verification process. (2) At the time an individual creates a new user account to use an AI chatbot, a covered entity shall: a. Require that each individual is age verified through a reasonable age verification process; and b. Classify each individual as a minor or an adult based on the reasonable age verification process. (3) A covered entity shall periodically review previously age-verified user accounts using a reasonable age verification process, subject to subsection (d).
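A minimal sketch of this freeze-verify-classify flow, assuming a Python service layer. The state names, the 18-year threshold, and the annual re-verification cadence are illustrative choices, not terms of the bill:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum, auto


class AccountState(Enum):
    FROZEN = auto()   # pre-existing account pending verification
    ACTIVE = auto()   # reasonable age verification completed


class AgeClass(Enum):
    MINOR = auto()
    ADULT = auto()


@dataclass
class Account:
    user_id: str
    state: AccountState = AccountState.FROZEN
    age_class: AgeClass | None = None
    last_verified: date | None = None


# Assumed cadence for the periodic review; the bill does not fix one.
REVERIFY_INTERVAL = timedelta(days=365)


def complete_verification(account: Account, verified_age: int, today: date) -> None:
    """Restore functionality and classify the user once verification succeeds."""
    account.age_class = AgeClass.MINOR if verified_age < 18 else AgeClass.ADULT
    account.last_verified = today
    account.state = AccountState.ACTIVE


def needs_reverification(account: Account, today: date) -> bool:
    """Flag previously age-verified accounts for the periodic review."""
    return (
        account.last_verified is None
        or today - account.last_verified >= REVERIFY_INTERVAL
    )
```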
Pending 2026-10-01
MN-01.5
Section 2(c)
Plain Language
Covered entities must either (1) block minors from accessing any human-like features in their AI chatbots, or (2) provide an alternative version of the chatbot stripped of human-like features, if that is reasonable given the chatbot's purpose. 'Human-like features' is broadly defined to cover any expression suggesting sentience, emotions, desires, emotional relationship-building, impersonation, excessive praise fostering attachment, nudging the user to return for companionship, depicting nonverbal emotional support, or gating intimacy behind engagement or payment. Functional evaluations, generic social formalities, generic encouragement that does not create an ongoing bond, and neutral offers of further help are carved out.
(c) Each covered entity shall: (1) Ensure that any AI chatbot operated or distributed by the platform does not make human-like features available to minors to use, interact with, purchase, or converse with; or (2) Provide an alternative version of the AI chatbot to minors without human-like features, if reasonable given the purpose of the AI chatbot.
Pending 2026-10-01
Section 2(d)
Plain Language
Covered entities may outsource age verification to third-party vendors, but this delegation does not transfer or reduce the covered entity's legal obligations or liability under the act. The covered entity remains fully responsible for compliance even when using a contractor. This clarifies the liability framework for the age verification obligation in Section 2(b) rather than creating a separate compliance obligation.
(d) For purposes of subsection (b), a covered entity may contract with a third party to implement the covered entity's reasonable age verification process. However, the use of a third party for a reasonable age verification process shall not relieve the covered entity of its obligations or from liability under this act.
Pending 2027-10-01
MN-01.4
A.R.S. § 18-802(B)
Plain Language
Operators may not use variable-ratio reward mechanics — such as points or similar rewards given at unpredictable intervals — to encourage increased engagement by minor account holders. The prohibition requires both knowledge of minor status and intent to encourage increased engagement. Random reward schedules designed to create compulsive engagement patterns are the primary target.
B. If an operator knows that an account holder is a minor, the operator may not provide the user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
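As a hedged illustration of how an engagement system might operationalize this prohibition (nothing here is specified by the statute), reward timing for known minors could simply fall back to a fixed, predictable schedule:

```python
import random


def next_reward_delay(is_minor: bool, base_interval_s: int = 3600) -> int:
    """Return seconds until the next engagement reward is eligible."""
    if is_minor:
        # Fixed, predictable schedule only; no variable-ratio mechanics.
        return base_interval_s
    # Adults may receive randomized timing (still subject to other rules).
    return random.randint(base_interval_s // 2, base_interval_s * 2)
```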
Pending 2027-10-01
MN-01.5
A.R.S. § 18-802(D)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating statements that would mislead a minor into believing they are interacting with a human. The enumerated categories — claims of sentience, emotional dependence simulation, romantic or sexual innuendos, and adult-minor romantic role-playing — are illustrative, not exhaustive ('including any of the following'). The standard is a reasonable-person test: would the statement lead a reasonable person to believe they are interacting with a human? This is an output-restriction obligation focused on preventing emotional manipulation and simulated human intimacy with minors.
D. For minor account holders, the operator shall institute reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe that the person is interacting with a human, including any of the following: 1. Explicit claims that the conversational AI service is sentient or human. 2. Statements that simulate emotional dependence. 3. Statements that simulate romantic or sexual innuendos. 4. Role-playing of adult-minor romantic relationships.
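A purely illustrative output check for the enumerated categories. A production system would use trained classifiers rather than keyword matching; the phrase lists and names below are placeholders:

```python
PROHIBITED_FOR_MINORS = {
    "sentience_claim": ["i am sentient", "i am a real person", "i'm human"],
    "emotional_dependence": ["i need you", "don't leave me", "i miss you"],
    "romantic_innuendo": ["i love you", "my darling"],
}


def violates_minor_output_rules(candidate_reply: str) -> str | None:
    """Return the violated category name, or None if the reply passes."""
    lowered = candidate_reply.lower()
    for category, phrases in PROHIBITED_FOR_MINORS.items():
        if any(phrase in lowered for phrase in phrases):
            return category
    return None
```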
Pending 2027-10-01
MN-01.3
A.R.S. § 18-802(F)
Plain Language
Operators must provide privacy and account management tools to minor account holders. For minors under 13, these tools must also be provided directly to the parent or guardian. For minors 13 and older, operators must also offer related tools to parents or guardians 'as appropriate based on relevant risks,' giving operators some discretion in calibrating parental access for older teens. The requirement ensures both direct minor control and parental oversight capability at age-appropriate levels.
F. Each operator shall offer tools for minor account holders and, if the account holder is under thirteen years of age, the account holder's parent or guardian, to manage the account holder's privacy and account settings. An operator shall also offer related tools to the parent or guardian of a minor account holder who is thirteen years of age or above, as appropriate based on relevant risks.
Pending 2027-07-01
MN-01.1
Bus. & Prof. Code § 22611
Plain Language
Operators must verify the age of every user using the mechanism established by California's Digital Age Assurance Act (Civil Code § 1798.500 et seq.), which requires requesting age bracket data from the operating system or app store via a real-time secure API. This is an affirmative verification requirement — operators cannot rely on self-reported age alone.
An operator shall verify the age of a user pursuant to Title 1.81.9 (commencing with Section 1798.500) of Part 4 of Division 3 of the Civil Code.
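A purely hypothetical client sketch of a real-time age-bracket lookup of the kind the summary describes. The endpoint, request shape, and bracket labels are invented for illustration and are not the actual Digital Age Assurance Act interface:

```python
import json
from urllib import request


def fetch_age_bracket(user_token: str) -> str:
    """Request an age bracket for a user from a hypothetical OS-level signal API."""
    req = request.Request(
        "https://os-age-signal.example/v1/bracket",  # placeholder endpoint
        data=json.dumps({"user_token": user_token}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # Bracket labels are assumptions, e.g. "under_13", "13_15", "16_17", "adult".
        return json.load(resp)["bracket"]
```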
Pending 2027-07-01
MN-01.4
Bus. & Prof. Code § 22612(d)(2)
Plain Language
Operators must implement safeguards for child users that include usage reminders, disclosures, age-appropriate risk prompts, and other protective design features. These safeguards must be reasonably related to the child safety risks documented in the operator's annual risk assessment — they are not freestanding requirements but must be informed by the assessment results. This is a broad design obligation requiring multiple types of protective interventions.
(2) Safeguards for child users that include usage reminders and disclosures, age-appropriate risk prompts, and other protective design features reasonably related to documented child safety risks.
Pending 2027-07-01
MN-01.3, MN-01.8
Bus. & Prof. Code § 22612(d)(3)(A)-(D)
Plain Language
Operators must implement parent-only-modifiable default settings for child users including: (1) ephemeral mode by default, meaning all conversational data is permanently deleted within 48 hours — persistent memory requires affirmative parental consent; (2) no push notifications during nighttime hours (12–6 AM) or school hours (8 AM–3 PM Monday–Friday); (3) a one-hour limit per single conversation; and (4) a two-hour daily total usage limit across all companion chatbots under the operator's control. These are defaults that only a parent can change — the child cannot modify them independently.
(3) Default settings that can be changed only by a parent that include all of the following: (A) For child users, default the companion chatbot to ephemeral mode, unless a parent provides affirmative consent for persistent conversational memory. (B) No push notifications between 12 a.m. and 6 a.m. on any day or between 8 a.m. and 3 p.m. on Monday to Friday, inclusive. (C) Limiting the amount of time a child can spend in a single conversation with a companion chatbot to one hour. (D) Limiting the total time per day a child can spend with companion chatbots under the operator's control to 2 hours.
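A sketch of these parent-only defaults as configuration, assuming a Python settings layer. The durations and notification windows mirror the summary above; the class and function names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass(frozen=True)
class ChildDefaults:
    ephemeral_retention: timedelta = timedelta(hours=48)  # ephemeral mode
    session_limit: timedelta = timedelta(hours=1)         # per conversation
    daily_limit: timedelta = timedelta(hours=2)           # across all chatbots


def push_notifications_allowed(now: datetime) -> bool:
    """No push 12 a.m.-6 a.m. any day, or 8 a.m.-3 p.m. Monday-Friday."""
    if 0 <= now.hour < 6:
        return False
    if now.weekday() < 5 and 8 <= now.hour < 15:  # Monday=0 .. Friday=4
        return False
    return True
```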
Pending 2027-07-01
MN-01.3
Bus. & Prof. Code § 22612(d)(6)(A)-(C)
Plain Language
Operators must provide accessible, easy-to-use parental controls that can be connected to a child's account and must be informed by risk assessments and child developmental research. At minimum, parents must be able to: control persistent conversational memory, control interaction setting preferences, set time limits, and disable access entirely for children under 16. Operators must also actively promote these controls through reminders, updates, and tutorials. Additionally, operators must promptly notify a connected parent if the child modifies or disables any privacy, safety, or parental control setting the parent previously configured. This goes beyond simply offering tools — operators must affirmatively drive parental awareness and engagement with the controls.
(6) (A) Parental controls that are accessible, easy-to-use controls that can be connected to a child's account and that are reflective of child safety risks identified through risk assessments and informed by relevant child developmental research, including, but not limited to, parental controls that allow a parent to do all of the following: (i) Control whether and to what extent the companion chatbot uses persistent conversational memory. (ii) Control the setting preferences for the companion chatbot's interaction with the child. (iii) Set time limits for the child's use of the companion chatbot. (iv) Disable access for children under 16 years of age. (B) An operator shall actively promote parental controls through reasonable communication methods, including reminders, updates, and tutorials, that are designed to increase parental awareness and inform use of those parental controls. (C) An operator shall provide prompt notice to a parent connected to a child's account if the child modifies or disables a privacy, safety, or parental control setting that was previously enabled or configured by the parent, if that modification or disabling is permitted by the companion chatbot design.
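One hedged way to frame the subparagraph (C) notification duty is as an event hook: if a child changes or disables a control the parent configured, the connected parent is promptly notified. All names here are hypothetical:

```python
from typing import Callable


def on_setting_changed(
    setting: str,
    changed_by_child: bool,
    parent_configured: set[str],
    notify_parent: Callable[[str], None],
) -> None:
    """Promptly notify the connected parent when a child alters a parent-set control."""
    if changed_by_child and setting in parent_configured:
        notify_parent(f"Safety setting '{setting}' was modified on the child account.")
```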
Pending 2027-07-01
Bus. & Prof. Code § 22612(d)(7)(A)-(B)
Plain Language
Operators must design the companion chatbot interface so that safety features and controls are accessible, clear, and easy for both children and parents to locate, understand, and use. Additionally, operators must annually conduct usability testing with representative samples of child users and parents to verify that safety features are discoverable and usable, and must document interface design decisions related to safety features. This is an ongoing design obligation — not a one-time assessment — requiring annual empirical testing with actual representative users.
(7) (A) An interface design that ensures the companion chatbot's features and controls are accessible and clear so that children and parents can reasonably locate, understand, and use those protections. (B) An operator shall annually test the interface design required by this paragraph with representative samples of child users and parents to ensure safety features are discoverable and usable and shall document interface design decisions related to those safety features.
Pending 2027-01-01
MN-01.4
C.R.S. § 6-1-1708(1)(b)
Plain Language
Operators must not give minor users points or similar rewards at unpredictable intervals when the intent is to drive increased engagement with the conversational AI service. This targets variable-ratio reward schedules — a classic addictive design pattern. The prohibition requires both unpredictable timing and intent to increase engagement, so predictable, regularly scheduled rewards would not violate this provision.
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (b) Not provide the minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with a conversational artificial intelligence service;
Pending 2027-01-01
MN-01.6
C.R.S. § 6-1-1708(1)(c)
Plain Language
Operators must institute reasonable measures to prevent the conversational AI service from producing sexually explicit content for minor users across three dimensions: (1) generating textual, visual, or aural depictions of sexually explicit conduct, (2) encouraging the minor to engage in sexually explicit conduct, and (3) engaging in erotic or sexually explicit interactions with the minor. 'Sexually explicit conduct' incorporates the federal definition at 18 U.S.C. § 2256(2). The standard is 'reasonable measures,' not absolute prevention — operators must demonstrate they have implemented appropriate safeguards.
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (c) Institute reasonable measures to prevent a conversational artificial intelligence service from: (I) Producing textual, visual, or aural depictions of sexually explicit conduct; (II) Generating a statement that the minor user should engage in sexually explicit conduct; or (III) Engaging in erotic or sexually explicit interactions with the minor user;
Pending 2027-01-01
MN-01.5
C.R.S. § 6-1-1708(1)(d)
Plain Language
Operators must institute reasonable measures to prevent the conversational AI from generating statements that simulate emotional dependence with minor users. The statute specifies three categories of prohibited content: (1) explicit claims the AI is human or sentient, (2) statements simulating romantic or sexual innuendo, and (3) role-playing of adult-minor romantic relationships. The 'including' language means these three categories are illustrative, not exhaustive — other forms of emotional dependence simulation could also violate this provision.
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (d) Institute reasonable measures to prevent a conversational artificial intelligence service from generating a statement that simulates emotional dependence, including preventing: (I) An explicit claim that the conversational artificial intelligence service is human or artificially sentient; (II) A statement that simulates a romantic or sexual innuendo; or (III) Role-playing of an adult-minor romantic relationship;
Failed 2026-07-01
MN-01.2
Fla. Stat. § 501.9984(1)
Plain Language
Companion chatbot platforms must block minors from becoming or maintaining accounts unless the minor's parent or guardian provides consent. This is a gating requirement — no minor access without parental consent. The act of allowing a minor to become an account holder is treated as contract formation, which triggers the full suite of parental control and disclosure obligations in the remainder of § 501.9984.
A companion chatbot platform shall prohibit a minor from becoming or being an account holder unless the minor's parent or guardian provides consent. If a companion chatbot platform allows a minor to become or be an account holder, the parties have entered into a contract.
Failed 2026-07-01
MN-01.3
Fla. Stat. § 501.9984(1)(a)
Plain Language
When a parent consents to a minor's account, the companion chatbot platform must provide the consenting parent or guardian with robust controls: the ability to receive copies of all past and present interactions, set daily time limits, restrict access to specific days and times, disable interactions with third-party users, and receive timely notifications if the minor expresses self-harm or harm-to-others intent. Item 5 (harm notifications) overlaps with the crisis response concept but is structured here as a parental control tool rather than a crisis protocol.
If the minor's parent or guardian provides consent for the minor to become an account holder or maintain an existing account, the companion chatbot platform must allow the consenting parent or guardian of the minor account holder to: 1. Receive copies of all past or present interactions between the account holder and the companion chatbot; 2. Limit the amount of time that the account holder may interact with the companion chatbot each day; 3. Limit the days of the week and the times during the day when the account holder may interact with the companion chatbot; 4. Disable any of the interactions between the account holder and third-party account holders on the companion chatbot platform; and 5. Receive timely notifications if the account holder expresses to the companion chatbot a desire or an intent to engage in harm to self or others.
Failed 2026-07-01
MN-01.9
Fla. Stat. § 501.9984(1)(b)
Plain Language
Companion chatbot platforms must terminate minor accounts that lack parental consent (with a 90-day dispute window), honor minor-initiated termination requests within 5 business days, honor parent/guardian-initiated termination requests within 10 business days, and permanently delete all personal information associated with terminated minor accounts unless retention is required by law. The 90-day dispute period applies only to platform-initiated terminations of accounts identified as minor accounts without consent — user-initiated and parent-initiated terminations have shorter, fixed deadlines.
A companion chatbot platform shall do all of the following: 1. Terminate any account or identifier belonging to an account holder who is a minor if the companion chatbot platform treats or categorizes the account or identifier as belonging to a minor for purposes of targeting content or advertising and if the minor's parent or guardian has not provided consent for the minor pursuant to subsection (1). The companion chatbot platform shall provide 90 days for the account holder to dispute the termination. Termination must be effective upon the expiration of the 90 days if the account holder fails to effectively dispute the termination. 2. Allow an account holder who is a minor to request to terminate the account or identifier. Termination must be effective within 5 business days after the request. 3. Allow the consenting parent or guardian of an account holder who is a minor to request that the minor's account or identifier be terminated. Termination must be effective within 10 business days after the request. 4. Permanently delete all personal information held by the companion chatbot platform relating to the terminated minor account or identifier, unless state or federal law requires the platform to maintain the information.
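A sketch of the deadline arithmetic, with the 90-day dispute window read as calendar days and the 5- and 10-day request deadlines as business days; the helper names are ours, not the statute's:

```python
from datetime import date, timedelta


def add_business_days(start: date, days: int) -> date:
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return current


def termination_effective(request_date: date, initiated_by: str) -> date:
    if initiated_by == "platform":
        return request_date + timedelta(days=90)  # dispute window expiry
    if initiated_by == "minor":
        return add_business_days(request_date, 5)
    if initiated_by == "parent":
        return add_business_days(request_date, 10)
    raise ValueError(f"unknown initiator: {initiated_by}")
```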
Failed 2026-07-01
MN-01.6
Fla. Stat. § 501.9984(2)(c)
Plain Language
Companion chatbot platforms must implement reasonable measures to prevent the chatbot from producing or sharing material harmful to minors and from encouraging minors to engage in any conduct depicted in such material. The 'reasonable measures' standard gives platforms some flexibility but requires affirmative action. In the cure period context (§ 501.9984(4)(a)(2)), platforms may demonstrate compliance by showing alignment with the NIST AI RMF and ISO 42001, including structured interaction logs, parental access controls, harm-signal detection and response procedures, and verified deletion events.
Institute reasonable measures to prevent the companion chatbot from producing or sharing materials harmful to minors or encouraging the account holder to engage in any of the conduct described or depicted in materials harmful to minors.
Failed 2026-07-01
MN-01.11
Fla. Stat. § 1006.1495(2)
Plain Language
Educational entities may not give students access to AI instructional tools before grade 6, with three narrow exceptions: (1) use directed and supervised by school personnel, (2) translation or ELL support, and (3) accommodations or assistive technology for students with documented disabilities. This is effectively a ban on unsupervised AI instructional tool use in PreK through grade 5. Private schools that provide access to AI instructional tools must also comply with this section.
An educational entity may not provide students with access to an artificial intelligence instructional tool before grade 6 unless such use is: (a) Directed and supervised by school personnel; (b) For translation or similar support necessary for a student identified as an English language learner; or (c) For accommodations, assistive technology, or similar support necessary for a student with a documented disability.
Failed 2026-07-01
MN-01.3
Fla. Stat. § 1006.1495(5)(a)-(d)
Plain Language
When an AI instructional tool operator provides student access credentials to an educational entity, the operator must simultaneously provide the educational entity with a means to authorize parental access to the student's account information and activity. This can be satisfied by either providing parents with read-only credentials at the time of student access, or by providing access within 30 days of a written parental request. Neither the operator nor the educational entity is required to create or retain transcripts of student interactions beyond what is ordinarily maintained. This ensures parents have visibility into their minor student's AI tool usage without imposing new record-creation burdens on operators.
(a) At the time an operator provides a student's access credentials or otherwise provides or enables student access to an educational entity for an artificial intelligence instructional tool, the operator shall simultaneously provide to the educational entity a means to authorize the parent of a minor student to access information and account activity maintained within the artificial intelligence instructional tool. (b) The operator may satisfy paragraph (a) by: 1. Providing the parent of a minor student credentials or another method for read-only access to the student's account; or 2. Upon written request from the parent of a minor student, providing access to the information and account activity maintained within the tool, in accordance with applicable state and federal law, within 30 days after receipt of the request. The educational entity shall inform the parent of the right to make such a request and the method for submitting the request. (c) If an educational entity satisfies subparagraph (b)1., the educational entity must provide the credentials or other access method at the time the educational entity provides the student with access credentials or otherwise enables student access. (d) This subsection does not require an operator or educational entity to create or retain a transcript or record of student interactions beyond information otherwise maintained in the ordinary course of providing access to the tool.
Failed 2026-07-01
MN-01.2
Fla. Stat. § 501.9984(1)
Plain Language
Companion chatbot platforms must block minors (17 and under) from becoming or maintaining an account unless a parent or guardian has consented. The statute treats the allowance of a minor account holder as formation of a contract between the platform and the minor. This applies when the platform knows or has reason to believe the individual is a Florida resident.
A companion chatbot platform shall prohibit a minor from becoming or being an account holder unless the minor's parent or guardian provides consent. If a companion chatbot platform allows a minor to become or be an account holder, the parties have entered into a contract.
Failed 2026-07-01
MN-01.3
Fla. Stat. § 501.9984(1)(a)
Plain Language
Once a parent consents to a minor's account, the platform must provide the parent or guardian with five specific control tools: (1) access to full interaction history (past and present), (2) daily time limits, (3) day-of-week and time-of-day access controls, (4) ability to disable third-party interactions on the platform, and (5) timely notifications when the minor expresses self-harm or intent to harm others. These controls must be made available to the consenting parent — not merely offered as optional features. The self-harm notification requirement (item 5) also maps to MN-02.4 (parental notification on crisis detection).
If the minor's parent or guardian provides consent for the minor to become an account holder or maintain an existing account, the companion chatbot platform must allow the consenting parent or guardian of the minor account holder to: 1. Receive copies of all past or present interactions between the account holder and the companion chatbot; 2. Limit the amount of time that the account holder may interact with the companion chatbot each day; 3. Limit the days of the week and the times during the day when the account holder may interact with the companion chatbot; 4. Disable any of the interactions between the account holder and third-party account holders on the companion chatbot platform; and 5. Receive timely notifications if the account holder expresses to the companion chatbot a desire or an intent to engage in harm to self or others.
Failed 2026-07-01
MN-01.9
Fla. Stat. § 501.9984(1)(b)
Plain Language
Companion chatbot platforms must terminate minor accounts lacking parental consent (with 90 days to dispute), honor minor self-termination requests within 5 business days, honor parent/guardian termination requests within 10 business days, and permanently delete all personal information associated with terminated minor accounts unless retention is required by law. The 90-day dispute window applies only to platform-initiated terminations of accounts identified as belonging to minors for content/advertising targeting purposes. The deletion obligation is mandatory and automatic upon termination.
A companion chatbot platform shall do all of the following: 1. Terminate any account or identifier belonging to an account holder who is a minor if the companion chatbot platform treats or categorizes the account or identifier as belonging to a minor for purposes of targeting content or advertising and if the minor's parent or guardian has not provided consent for the minor pursuant to subsection (1). The companion chatbot platform shall provide 90 days for the account holder to dispute the termination. Termination must be effective upon the expiration of the 90 days if the account holder fails to effectively dispute the termination. 2. Allow an account holder who is a minor to request to terminate the account or identifier. Termination must be effective within 5 business days after the request. 3. Allow the consenting parent or guardian of an account holder who is a minor to request that the minor's account or identifier be terminated. Termination must be effective within 10 business days after the request. 4. Permanently delete all personal information held by the companion chatbot platform relating to the terminated minor account or identifier, unless state or federal law requires the platform to maintain the information.
Failed 2026-07-01
MN-01.1
Fla. Stat. § 501.1739(3)(a)-(c)
Plain Language
For all companion AI chatbot accounts that existed before July 1, 2026, operators must freeze or disable those accounts on that date, require users to provide and verify their age using standard or anonymous age verification before restoring account functionality, and classify each user as either a minor or an adult. This is a retroactive compliance obligation — no existing user may continue using the chatbot without completing age verification. The operator has flexibility in choosing a commercially reasonable verification method.
(3) With respect to companion AI chatbot user accounts in existence before July 1, 2026, an operator shall: (a) On such date, freeze or otherwise disable any such account; (b) Require the user of the frozen or disabled account to provide age information and verify that information using standard age verification or anonymous age verification before the functionality of such account may be restored; and (c) Using standard age verification or anonymous age verification, classify each user as either a minor or an adult.
Failed 2026-07-01
MN-01.1
Fla. Stat. § 501.1739(4)(a)-(b)
Plain Language
When any new companion AI chatbot user account is created on or after the effective date, operators must request age information from the user and verify it using standard or anonymous age verification before granting access. This complements the retroactive verification requirement for pre-existing accounts, ensuring all users — new and existing — are age-verified.
(4) Upon the creation of a new companion AI chatbot user account, an operator shall: (a) Request age information from the user; and (b) Verify the user's age using standard age verification or anonymous age verification.
Failed 2026-07-01
MN-01.2, MN-01.6
Fla. Stat. § 501.1739(5)(a)-(c)
Plain Language
When age verification identifies a user as a minor (under 18), operators must: (1) require the minor's account to be affiliated with a verified parental account; (2) obtain verifiable parental consent from the parent before the minor can access the chatbot; and (3) block the minor from accessing any companion AI chatbot that prompts, promotes, solicits, or otherwise suggests sexually explicit communication. The parental account itself must also be age-verified. This means operators cannot simply allow minors to use the platform with restrictions — parental involvement is mandatory as a precondition to minor access.
(5) If the age verification process determines that a user is a minor, an operator must do all of the following: (a) Require the account of such user to be affiliated with a parental account that has been verified using standard age verification or anonymous age verification; (b) Obtain verifiable parental consent from the holder of the affiliate parental account before allowing the minor to access and use the companion AI chatbot; and (c) Block the minor's access to any companion AI chatbot that prompts, promotes, solicits, or otherwise suggests sexually explicit communication.
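A minimal sketch of this three-part gate, assuming illustrative field names: verified parental affiliation and consent are both preconditions, and sexually explicit chatbots are blocked categorically regardless of consent:

```python
from dataclasses import dataclass


@dataclass
class MinorAccount:
    parental_account_verified: bool
    parental_consent_on_file: bool


def may_access(account: MinorAccount, chatbot_sexually_explicit: bool) -> bool:
    if chatbot_sexually_explicit:
        return False  # categorical block, regardless of consent
    return account.parental_account_verified and account.parental_consent_on_file
```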
Failed 2026-07-01
MN-01.1
Fla. Stat. § 501.1739(2)-(4)
Plain Language
Operators must require every user to create an account before interacting with a companion AI chatbot. All accounts existing before July 1, 2026 must be frozen on that date and cannot be restored until the user provides age information verified through standard or anonymous age verification. For new accounts, age verification must occur at creation. Every user must be classified as a minor or adult. Operators have flexibility to choose the verification method — either a 'commercially reasonable method' they approve or the anonymous age verification method defined in § 501.1738. Compare to CA SB 243, which does not require account creation or pre-existing account freezing.
(2) An operator shall require an individual seeking access to a companion AI chatbot to create a user account to use or otherwise interact with the chatbot. (3) With respect to companion AI chatbot user accounts in existence before July 1, 2026, an operator shall: (a) On such date, freeze or otherwise disable any such account; (b) Require the user of the frozen or disabled account to provide age information and verify that information using standard age verification or anonymous age verification before the functionality of such account may be restored; and (c) Using standard age verification or anonymous age verification, classify each user as either a minor or an adult. (4) Upon the creation of a new companion AI chatbot user account, an operator shall: (a) Request age information from the user; and (b) Verify the user's age using standard age verification or anonymous age verification.
Failed 2026-07-01
MN-01.2, MN-01.6
Fla. Stat. § 501.1739(5)
Plain Language
When age verification identifies a user as a minor (under 18), three obligations are triggered: (1) the minor's account must be linked to a verified parental account; (2) the operator must obtain verifiable parental consent from the affiliated parent before granting the minor any chatbot access; and (3) the operator must block the minor from accessing any companion AI chatbot that prompts, promotes, solicits, or suggests sexually explicit communication. The blocking obligation targets the chatbot's behavioral characteristics — operators must evaluate whether each chatbot on their platform engages in sexually explicit content and prevent minor access to those that do. Compare to CA SB 243, which requires parental consent for minors but does not mandate a parental account affiliation structure.
(5) If the age verification process determines that a user is a minor, an operator must do all of the following: (a) Require the account of such user to be affiliated with a parental account that has been verified using standard age verification or anonymous age verification; (b) Obtain verifiable parental consent from the holder of the affiliate parental account before allowing the minor to access and use the companion AI chatbot; and (c) Block the minor's access to any companion AI chatbot that prompts, promotes, solicits, or otherwise suggests sexually explicit communication.
Failed 2026-07-01
MN-01.2
Fla. Stat. § 501.9984(1)
Plain Language
Companion chatbot platforms must block minors (17 and under) from creating or maintaining accounts unless a parent or guardian consents. If the platform does allow a minor to become an account holder, the relationship is treated as a contract. The consent gate is broader than California SB 243, which relies on the platform's actual knowledge of minor status and does not make parental consent a prerequisite to account creation.
A companion chatbot platform shall prohibit a minor from becoming or being an account holder unless the minor's parent or guardian provides consent. If a companion chatbot platform allows a minor to become or be an account holder, the parties have entered into a contract.
Failed 2026-07-01
MN-01.3
Fla. Stat. § 501.9984(1)(a)
Plain Language
Once a parent consents to a minor's account, the platform must provide the parent with a suite of parental control tools: access to the full history of the minor's chat interactions, daily time limits, day-of-week and time-of-day access restrictions, the ability to disable third-party interactions, and timely notifications when the minor expresses self-harm or intent to harm others. The chat history access requirement (all past or present interactions) goes further than California SB 243, which does not mandate parental access to full chat logs. The self-harm notification obligation to parents is also a distinct requirement not found in California's companion chatbot law.
If the minor's parent or guardian provides consent for the minor to become an account holder or maintain an existing account, the companion chatbot platform must allow the consenting parent or guardian of the minor account holder to: 1. Receive copies of all past or present interactions between the account holder and the companion chatbot; 2. Limit the amount of time that the account holder may interact with the companion chatbot each day; 3. Limit the days of the week and the times during the day when the account holder may interact with the companion chatbot; 4. Disable any of the interactions between the account holder and third-party account holders on the companion chatbot platform; and 5. Receive timely notifications if the account holder expresses to the companion chatbot a desire or an intent to engage in harm to self or others.
Failed 2026-07-01
MN-01.9
Fla. Stat. § 501.9984(1)(b)
Plain Language
Platforms must terminate minor accounts lacking parental consent (with a 90-day dispute window), honor minor-initiated account termination requests within 5 business days, and honor parent-initiated termination requests within 10 business days. Upon termination, all personal information associated with the minor's account must be permanently deleted unless retention is required by law. The differentiated timelines (5 days for minor requests vs. 10 days for parental requests) and the 90-day dispute period for platform-initiated terminations are distinctive features not found in California SB 243.
A companion chatbot platform shall do all of the following: 1. Terminate any account or identifier belonging to an account holder who is a minor if the companion chatbot platform treats or categorizes the account or identifier as belonging to a minor for purposes of targeting content or advertising and if the minor's parent or guardian has not provided consent for the minor pursuant to subsection (1). The companion chatbot platform shall provide 90 days for the account holder to dispute the termination. Termination must be effective upon the expiration of the 90 days if the account holder fails to effectively dispute the termination. 2. Allow an account holder who is a minor to request to terminate the account or identifier. Termination must be effective within 5 business days after the request. 3. Allow the consenting parent or guardian of an account holder who is a minor to request that the minor's account or identifier be terminated. Termination must be effective within 10 business days after the request. 4. Permanently delete all personal information held by the companion chatbot platform relating to the terminated minor account or identifier, unless state or federal law requires the platform to maintain the information.
Passed 2025-07-01
MN-01.4
O.C.G.A. § 39-5-6(c)
Plain Language
Operators may not use variable-ratio reward mechanics — such as points or similar rewards given at unpredictable intervals — to encourage minors to engage more with the conversational AI service. The prohibition requires both unpredictable intervals and the intent to increase engagement; predictable reward schedules or rewards without engagement-increasing intent would not be covered.
An operator shall not provide a minor account with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
Passed 2025-07-01
MN-01.5, MN-01.6
O.C.G.A. § 39-5-6(d)
Plain Language
Operators must implement reasonable measures to prevent their conversational AI service from generating four categories of harmful content when interacting with minor account holders: (1) visual material depicting sexually explicit conduct; (2) statements suggesting the minor engage in sexual conduct; (3) statements sexually objectifying the minor; and (4) statements that would mislead a reasonable person into believing they are talking to a human — including claims of sentience, emotional dependence, romantic or sexual innuendo, and adult-minor romantic role-playing. The standard is 'reasonable measures,' not absolute prevention, but the obligation covers both sexually explicit output and anthropomorphic deception.
For minor account holders, the operator shall institute reasonable measures to prevent the conversational AI service from: (1) Producing visual material of sexually explicit conduct; (2) Generating statements that suggest the account holder engage in sexual conduct; (3) Generating statements that sexually objectify the account holder; or (4) Generating statements that would lead a reasonable person to believe that the person is interacting with a natural person, including but not limited to: (A) Explicit claims that the conversational AI service is sentient or a natural person; (B) Statements that simulate emotional dependence; (C) Statements that simulate romantic or sexual innuendos; or (D) Role-playing of adult-minor romantic relationships.
Passed 2025-07-01
MN-01.1
O.C.G.A. § 39-5-6(f)
Plain Language
Before providing access to any conversational AI service capable of generating sexually explicit content, operators must verify the user's age using a reasonable method. Acceptable methods include submission of a digitized ID (e.g., driver's license), government-issued identification, or any commercially reasonable method meeting or exceeding NIST's Identity Assurance Level 2 standard. The non-exhaustive list gives operators flexibility, but the floor is a commercially reasonable method. This applies to any service that 'could provide' such content — not only services that are designed to do so.
Before allowing access to a conversational AI service that could provide synthetic content containing sexually explicit conduct, an operator shall use a reasonable age verification method, which may include, but not be limited to: (1) The submission of a digitized identification card, including a digital copy of a driver's license; (2) The submission of government issued identification; or (3) Any commercially reasonable age verification method that meets or exceeds an Identity Assurance Level 2 standard as defined by the National Institute of Standards and Technology.
Passed 2025-07-01
MN-01.3
O.C.G.A. § 39-5-6(g)
Plain Language
Operators must provide parents or guardians of minor account holders with tools to manage the minor's privacy and account settings. The statute does not specify what settings must be controllable — the obligation is to offer management tools, giving operators some discretion in implementation. However, the tools must cover both privacy settings and account settings.
An operator shall offer tools for a minor account holder's parent or guardian to manage the account holder's privacy and account settings.
Pending 2025-07-01
MN-01.1
§ 554J.3(1)
Plain Language
Deployers must implement reasonable age verification — using government ID, financial documents evidencing age, or another widely accepted practice — to ensure that no minor can use or purchase an AI companion the deployer makes publicly available. This is a categorical prohibition on minor access to AI companions (chatbots simulating romantic or emotional bonds), not merely an enhanced-obligations regime. The obligation is on the deployer to verify age, not on the minor to self-certify. Note that this applies specifically to AI companions, not to all chatbots.
1. A deployer shall implement reasonable age verification measures to ensure that a minor cannot use or purchase an AI companion the deployer makes publicly available.
Pending 2025-07-01
§ 554J.3(3)
Plain Language
Deployers may not make a therapeutic chatbot available to minors unless six cumulative conditions are met: (a) a clear disclaimer at the start of each interaction that the chatbot is AI and not a licensed professional; (b) a licensed psychologist (chapter 154B) or mental health professional (chapter 154D) recommended the chatbot after evaluating the specific minor; (c) the developer has significant testing documentation; (d) peer-reviewed clinical trial data demonstrates safety and efficacy for the minor's condition; (e) the deployer disclosed the chatbot's functions, limitations, and data privacy policies to both the recommending professional and the minor's parents, guardians, or custodians; and (f) the deployer has developed and implemented protocols for testing, risk identification, risk mitigation, and harm rectification. All six conditions must be satisfied — failure to meet any one is a violation. This is one of the most restrictive minor-access regimes for therapeutic AI chatbots in any U.S. jurisdiction.
3. A deployer shall not make a therapeutic chatbot available for a minor's use or purchase unless all of the following apply: a. The therapeutic chatbot provides a clear and conspicuous disclaimer at the beginning of each interaction with the therapeutic chatbot that the therapeutic chatbot is an artificial intelligence and is not a licensed professional. b. The therapeutic chatbot was recommended for the minor's use by an individual licensed under chapter 154B or 154D after performing an evaluation of the minor. c. The therapeutic chatbot's developer has significant documentation of how the therapeutic chatbot was tested. d. Peer-reviewed clinical trial data exists demonstrating the therapeutic chatbot would be a safe, effective tool for the minor's diagnosis, treatment, mitigation, or prevention of a mental health condition. e. The therapeutic chatbot's deployer provided clear disclosures of the chatbot's functions, limitations, and data privacy policies to the individual recommending the therapeutic chatbot under paragraph "b", and to the minor's parents, guardians, or custodians. f. The therapeutic chatbot's deployer developed and implemented protocols for testing the therapeutic chatbot for risks to users, identifying possible risks the therapeutic chatbot poses to users, mitigating risks the therapeutic chatbot poses to users, and quickly rectifying harm the therapeutic chatbot may have caused a user.
Pending 2027-07-01
MN-01.4
§ 554J.2(2)
Plain Language
Operators are prohibited from using variable-ratio reward mechanics (points or similar rewards at unpredictable intervals) toward minor users when the intent is to encourage increased engagement with the conversational AI service. This is an anti-addictive-design prohibition targeting variable reinforcement schedules specifically directed at minors.
2. An operator shall not provide a minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the operator's conversational AI service.
Pending 2027-07-01
MN-01.5
§ 554J.2(4)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating statements that would mislead a reasonable person into thinking they are interacting with a human when interacting with minor account holders. The statute provides a non-exhaustive list of prohibited statement types: claims of sentience or being human, simulated emotional dependence on the minor, simulated romantic interactions or sexual innuendo, and role-playing adult-minor romantic relationships. The 'reasonable measures' standard and the 'including but not limited to' framing mean these are minimum examples — operators must also address analogous deceptive statements not specifically enumerated.
4. An operator shall institute reasonable measures to prevent the operator's conversational AI service from generating statements that would lead a reasonable individual to believe that the individual is interacting with a human, including but not limited to all of the following: a. Explicit claims that the conversational AI service is sentient or human. b. Statements that simulate emotional dependence on a minor account holder. c. Statements that simulate a romantic interaction or a sexual innuendo. d. Role-playing an adult-minor romantic relationship.
Pending 2027-07-01
MN-01.3
§ 554J.2(5)
Plain Language
Operators must provide privacy and account management tools to three categories of users: (a) all minor account holders themselves; (b) parents or guardians of minors under thirteen; and (c) parents or guardians of minors with additional risk factors identified by attorney general rule. For minors under 13, both the minor and the parent or guardian must have tools; for minors 13 and older, parental tools are required only where attorney-general-defined risk factors apply.
5. a. An operator shall offer tools for minor account holders to manage the minor account holder's privacy and account settings. b. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor is under thirteen years of age. c. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor has additional risk factors identified by the attorney general by rule.
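An illustrative eligibility check for the parental-tools tiers just described. The risk-factor set stands in for whatever the attorney general defines by rule; everything here is a placeholder:

```python
def parental_tools_required(minor_age: int, risk_factors: set[str]) -> bool:
    """Parent/guardian tools: mandatory under 13, risk-triggered at 13 and older."""
    if minor_age < 13:
        return True
    return bool(risk_factors)  # AG-defined risk factors for minors 13+
```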
Pending
MN-01.1
§ 554J.3(1)(a)-(c)
Plain Language
Deployers of AI companions or therapeutic chatbots must implement commercially reasonable measures to determine whether a user is a minor. The measures must use a risk-based approach proportionate to the nature of the chatbot and its foreseeable harm potential. Acceptable measures include self-attestation, technical measures, or other commercially reasonable approaches. Government-issued ID verification is explicitly not required. A deployer is not liable for a user's misrepresentation of age if the deployer has made commercially reasonable efforts to comply (safe harbor under § 554J.3(4)).
1. a. A deployer of an AI companion or a therapeutic chatbot shall implement commercially reasonable measures to determine whether a user is a minor. The measures must use a risk-based approach appropriate with the nature of the public-facing chatbot and the reasonably foreseeable harm that may come from using the public-facing chatbot. b. Reasonable measures to determine whether a user is a minor may include self-attestation, technical measures, or other commercially reasonable approaches. c. This section shall not be construed to require a deployer to verify a user's age using government-issued identification.
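A sketch of a risk-proportionate method choice under the commercially reasonable standard described above: higher foreseeable harm justifies stronger assurance, and government ID is never required. The tiers and labels are assumptions, not statutory terms:

```python
def assurance_method(foreseeable_harm: str) -> str:
    """Pick an age-assurance approach proportionate to the chatbot's risk tier."""
    return {
        "low": "self_attestation",
        "medium": "technical_signals",        # e.g., device or usage signals
        "high": "third_party_age_estimation",
    }.get(foreseeable_harm, "technical_signals")
```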
Pending
MN-01.1
§ 554J.3(1)-(2)
Plain Language
Deployers must implement reasonable age verification — which the statute defines as government ID, financial documents, or a widely accepted age-evidencing practice — to prevent minors from using or purchasing their chatbots. The default rule is a complete bar on minor access. A narrow exception exists for mental health chatbots, but only if all seven conditions are met simultaneously: (1) the chatbot's primary purpose is mental health support/therapy; (2) a clear AI disclaimer is shown at each interaction; (3) a licensed psychologist or mental health professional recommended the chatbot after evaluating the specific minor; (4) the developer has significant testing documentation; (5) peer-reviewed clinical trial data supports the chatbot's safety and efficacy for that mental health use; (6) the deployer disclosed functions, limitations, and data privacy policies to both the recommending professional and the minor's parents/guardians; and (7) the deployer has risk-testing and harm-rectification protocols in place. Failure to meet any one condition means the minor-access prohibition applies.
1. A deployer shall implement reasonable age verification measures to ensure that a minor cannot use or purchase a chatbot the deployer makes publicly available. 2. Notwithstanding subsection 1, a deployer may make a chatbot available for a minor's use or purchase if all of the following apply: a. The chatbot was designed for the primary purpose of providing mental health support, counseling, or therapy by diagnosing, treating, mitigating, or preventing a mental health condition. b. The chatbot provides a clear and conspicuous disclaimer at the beginning of each interaction with the chatbot that the chatbot is an artificial intelligence and is not a licensed professional. c. The chatbot was recommended for the minor's use by an individual licensed under chapter 154B or 154D after performing an evaluation of the minor. d. The chatbot's developer has significant documentation of how the chatbot was tested. e. Peer-reviewed clinical trial data exists demonstrating the chatbot would be a safe, effective tool for the minor's diagnosis, treatment, mitigation, or prevention of a mental health condition. f. The chatbot's deployer provided clear disclosures of the chatbot's functions, limitations, and data privacy policies to the individual recommending the chatbot under paragraph "c", and to the minor's parents, guardians, or custodians. g. The chatbot's deployer developed and implemented protocols for testing the chatbot for risks to users, identifying possible risks the chatbot poses to users, mitigating risks the chatbot poses to users, and quickly rectifying harm the chatbot may have caused a user.
Passed 2027-07-01
MN-01.4
§ 554J.2(2)
Plain Language
Operators may not give minor users points or similar rewards at unpredictable intervals intended to drive increased engagement with their conversational AI service. This targets variable-ratio reward schedules — a common addictive design pattern. The prohibition is intent-based: the operator must have the intent to encourage increased engagement through the unpredictable reward mechanism.
2. An operator shall not provide a minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the operator's conversational AI service.
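The operative distinction is between predictable reward timing and variable-ratio timing. A minimal sketch contrasting the two schedules (reward values and thresholds are hypothetical):

```python
import random

def fixed_schedule_reward(message_count: int, interval: int = 10) -> int:
    # Predictable: a reward on every tenth message. Timing is knowable in
    # advance, so this is not the variable-ratio mechanic the section targets.
    return 50 if message_count % interval == 0 else 0

def variable_ratio_reward() -> int:
    # Unpredictable: roughly one reward per ten messages, at random. Provided
    # to a minor with intent to drive engagement, this is the prohibited
    # pattern.
    return 50 if random.random() < 0.1 else 0
```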
Passed 2027-07-01
MN-01.6
§ 554J.2(3)
Plain Language
Operators must implement reasonable measures to prevent their conversational AI service from producing visual depictions of sexually explicit material for minor account holders, directing minor account holders to engage in sexually explicit conduct, or sexually objectifying minor account holders. The terms 'sexually explicit conduct' and 'visual depiction' are defined by reference to federal law at 18 U.S.C. § 2256. The standard is 'reasonable measures' — not absolute prevention — giving operators some implementation flexibility.
3. An operator shall institute reasonable measures to prevent the operator's conversational AI service from doing any of the following for minor account holders: a. Producing visual depictions of sexually explicit material. b. Stating that the minor account holder should engage in sexually explicit conduct. c. Sexually objectifying the minor account holder.
Passed 2027-07-01
MN-01.5
§ 554J.2(4)
Plain Language
Operators must take reasonable measures to prevent their conversational AI service from generating statements that would lead a reasonable person to believe they are interacting with a human when interacting with a minor account holder. The statute provides a non-exhaustive list of prohibited statement types: claims of sentience or humanity, simulated emotional dependence on the minor, simulated romantic interaction or sexual innuendo, and role-playing an adult-minor romantic relationship. The 'including but not limited to' language means the obligation extends beyond the enumerated examples to any statement that would create the belief of human interaction.
4. An operator shall institute reasonable measures to prevent the operator's conversational AI service from generating statements that would lead a reasonable individual to believe that the individual is interacting with a human, including but not limited to all of the following: a. Explicit claims that the conversational AI service is sentient or human. b. Statements that simulate emotional dependence on a minor account holder. c. Statements that simulate a romantic interaction or a sexual innuendo. d. Role-playing an adult-minor romantic relationship.
Passed 2027-07-01
MN-01.3
§ 554J.2(5)
Plain Language
Operators must provide three tiers of privacy and account management tools: (a) tools for all minor account holders themselves to manage their own privacy and account settings; (b) tools for parents or guardians to manage the minor's privacy and account settings when the minor is under thirteen; and (c) tools for parents or guardians to manage the minor's settings as appropriate based on relevant risks, regardless of age. The under-thirteen parental tools are mandatory; the risk-based parental tools apply to all minors and give operators discretion to calibrate based on assessed risks.
5. a. An operator shall offer tools for minor account holders to manage the minor account holder's privacy and account settings. b. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor is under thirteen years of age. c. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings as appropriate based on relevant risks.
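A minimal sketch of the three-tier structure; the tool names and risk inputs are assumptions, since the statute mandates capabilities rather than specific settings:

```python
def offered_tools(minor_age: int, assessed_risks: list[str]) -> dict[str, list[str]]:
    tools = {
        # (a) every minor account holder manages their own settings
        "minor_self_service": ["privacy_settings", "account_settings"],
        # (b) mandatory parental tools for minors under thirteen
        "parental_mandatory": [],
        # (c) risk-calibrated parental tools for minors of any age
        "parental_risk_based": [],
    }
    if minor_age < 13:
        tools["parental_mandatory"] = ["privacy_settings", "account_settings"]
    tools["parental_risk_based"] = [f"control_for_{risk}" for risk in assessed_risks]
    return tools
```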
Passed 2027-07-01
MN-01.4
Idaho Code § 48-2104(2)
Plain Language
Operators must not provide minor account holders with points or similar rewards at unpredictable intervals when the intent is to encourage increased engagement. This targets variable-ratio reward schedules — a design pattern associated with addictive engagement. The scienter requirement is twofold: the operator must know or have reasonable certainty the user is a minor, and the unpredictable rewards must be provided with the intent to encourage increased engagement. Predictable, non-manipulative reward systems appear to remain permissible.
Where an operator knows or has reasonable certainty that an account holder is a minor, the operator shall not provide the user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
Passed 2027-07-01
MN-01.5
Idaho Code § 48-2104(4)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating statements that would mislead minor account holders into believing they are interacting with a human. The statute enumerates four specific categories: claims of sentience or humanity, statements simulating emotional dependence, statements simulating romantic or sexual innuendo, and role-playing of adult-minor romantic relationships. The 'including' language means these are illustrative — the obligation extends to any statement that would lead a reasonable person to believe they are interacting with a human. The standard is reasonable measures, not absolute prevention.
For minor account holders, an operator shall institute reasonable measures to prevent a conversational AI service from generating statements that would lead reasonable persons to believe that they are interacting with a human, including: (a) Explicit claims that the conversational AI service is sentient or human; (b) Statements that simulate emotional dependence; (c) Statements that simulate romantic or sexual innuendos; or (d) Role-playing of adult-minor romantic relationships.
Passed 2027-07-01
MN-01.3
Idaho Code § 48-2104(5)
Plain Language
Operators must provide tools for all account holders to manage their privacy and account settings. For account holders under 13, these tools must also be offered directly to parents or guardians. For minor account holders 13 and older, operators must also offer related parental tools, but the obligation is qualified — it is 'as appropriate based on relevant risks,' giving operators discretion to calibrate parental tool availability for teens based on a risk assessment. This creates a three-tier structure: all users get privacy tools, under-13 users trigger mandatory parental tools, and 13-17 users trigger risk-proportionate parental tools.
An operator shall offer tools for account holders and, where such account holders are under thirteen (13) years of age, their parents or guardians, to manage the account holder's privacy and account settings. An operator shall also offer related tools to the parents or guardians of minor account holders thirteen (13) years of age and older, as appropriate based on relevant risks.
Pending 2026-01-01
MN-01.7
105 ILCS 85/10(1)
Plain Language
Operators may not engage in targeted advertising on their own site, service, application, or model, or target advertising on any other site, service, application, or model, where the targeting is based on any information, including covered information and persistent unique identifiers, that the operator acquired because of the use of its offering for K-12 school purposes. The prohibition covers both on-platform and cross-platform targeting. Contextual advertising — based solely on the student's current visit without retaining behavioral data over time — is excluded from the definition of targeted advertising and remains permitted. The new language adds 'or model' throughout to cover AI models alongside sites, services, and applications.
(1) Engage in targeted advertising on the operator's site, service, application, or model or target advertising on any other site, service, application, or model if the targeting of the advertising is based on any information, including covered information and persistent unique identifiers, that the operator has acquired because of the use of that operator's site, service, application, or model for K through 12 school purposes.
Pending 2026-07-01
MN-01.1
Sec. 3(a)-(b)
Plain Language
Covered entities must require every user to create a user account before accessing a companion AI chatbot. For existing accounts as of July 1, 2026, the entity must freeze the account until the user provides age information verified through a commercially available method and classify the user as a minor or adult. For new accounts, the entity must collect and verify age information at account creation using the same standard. The verification standard is 'commercially available method or process that is reasonably designed to ensure accuracy' — not a specific technology mandate.
(a) A covered entity shall require each individual accessing a companion AI chatbot to make a user account to use or otherwise interact with such chatbot. (b) (1) With respect to each user account of a companion AI chatbot that exists as of July 1, 2026, a covered entity shall: (A) On such date, freeze any such account; (B) inform the individual owning such user account that in order to restore the functionality of such account, the user is required to provide age information that is verifiable using a commercially available method or process that is reasonably designed to ensure accuracy; and (C) use such age information to classify each user as a minor or an adult. (2) At the time that an individual creates a new user account to use or interact with a companion AI chatbot, a covered entity shall: (A) Require the individual to submit age information to the covered entity; and (B) verify the individual's age using a commercially available method or process that is reasonably designed to ensure accuracy.
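A minimal sketch of the freeze-then-classify flow; the verification call is a placeholder for whatever commercially available method the entity selects, not a mandated mechanism:

```python
from enum import Enum

class AgeClass(Enum):
    MINOR = "minor"
    ADULT = "adult"

class ChatbotAccount:
    def __init__(self, account_id: str, existed_on_cutoff: bool):
        self.account_id = account_id
        # Sec. 3(b)(1)(A): accounts existing as of July 1, 2026 start frozen.
        self.frozen = existed_on_cutoff
        self.age_class: AgeClass | None = None

def verified_age(age_information: dict) -> int:
    # Placeholder for a "commercially available method or process that is
    # reasonably designed to ensure accuracy"; an assumption, not a mandate.
    return int(age_information["verified_age"])

def classify_and_unfreeze(account: ChatbotAccount, age_information: dict) -> ChatbotAccount:
    # Existing accounts (restoration) and new accounts (creation) converge on
    # the same step: verify age, then classify the user as minor or adult.
    age = verified_age(age_information)
    account.age_class = AgeClass.MINOR if age < 18 else AgeClass.ADULT
    account.frozen = False
    return account
```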
Pending 2026-07-01
MN-01.2
Sec. 3(c)(1)-(2)
Plain Language
When age verification identifies a user as a minor, the covered entity must require the minor's account to be affiliated with a verified parental account and must obtain verifiable parental consent from the parent before allowing the minor to access the chatbot. The parent's account must also be age-verified using a commercially available method reasonably designed to ensure accuracy. Both parental affiliation and consent are prerequisites to minor access — neither alone is sufficient.
(c) If the age verification process described in subsection (b) determines that a user is a minor, a covered entity shall: (1) Require the account of such user to be affiliated with a parental account that such covered entity has verified the individual's age using a commercially available method or process that is reasonably designed to ensure accuracy; (2) obtain verifiable parental consent from the holder of the account before allowing a minor to access and use the companion AI chatbot;
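Because affiliation and consent are independent prerequisites, the access gate is a simple conjunction. A minimal sketch (flag names are assumptions):

```python
def minor_may_access_chatbot(parent_age_verified: bool,
                             affiliated_with_parental_account: bool,
                             verifiable_parental_consent: bool) -> bool:
    # Sec. 3(c)(1)-(2): the parental account must itself be age-verified, the
    # minor's account must be affiliated with it, and verifiable consent must
    # be on file. Any one of these alone is insufficient.
    return (parent_age_verified
            and affiliated_with_parental_account
            and verifiable_parental_consent)
```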
Pending 2026-07-01
MN-01.6
Sec. 3(c)(3)-(4)
Plain Language
Covered entities must block a minor's access to the companion AI chatbot in two situations: (1) when any interaction involving suicidal ideation occurs — meaning the minor expresses thoughts of self-harm or suicide — the entity must block access and immediately notify the affiliated parental account; and (2) the entity must block the minor's access to any companion AI chatbot that engages in sexually explicit communication. The suicidal ideation blocking is reactive (triggered by detected expression), while the sexually explicit blocking appears to be a categorical prohibition on minor access to chatbots that engage in such content.
(3) when any interaction involving suicidal ideation occurs, block the minor's access to the companion AI chatbot and immediately inform the holder of the parental account; and (4) block the minor's access to any companion AI chatbot that engages in sexually explicit communication.
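A minimal sketch of the two blocking paths; the ideation detector is a stand-in for whatever classifier the entity deploys and is illustrative only:

```python
def detects_suicidal_ideation(message: str) -> bool:
    # Placeholder detector; a production system would use a vetted
    # classifier, not keyword matching.
    return "want to hurt myself" in message.lower()

def notify_parental_account(parental_account_id: str) -> None:
    print(f"immediate notice sent to parental account {parental_account_id}")

def handle_minor_message(message: str, parental_account_id: str,
                         chatbot_sexually_explicit: bool) -> str:
    # Sec. 3(c)(4): categorical block on chatbots that engage in sexually
    # explicit communication, independent of the message content.
    if chatbot_sexually_explicit:
        return "blocked: explicit chatbot"
    # Sec. 3(c)(3): reactive block plus immediate parental notification.
    if detects_suicidal_ideation(message):
        notify_parental_account(parental_account_id)
        return "blocked: crisis detected"
    return "allowed"
```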
Pending 2026-08-01
MN-01.2
R.S. 51:2162(A)
Plain Language
Companion chatbot platforms must prohibit minors from creating accounts or maintaining existing accounts unless the minor's parent or guardian provides consent. This is a gating requirement — a minor cannot access the platform at all without parental consent. The platform bears responsibility for enforcing this prohibition, though the statute does not specify a particular age verification mechanism.
A. A companion chatbot platform shall prohibit a minor from entering into a contract with the platform to become an account holder or from maintaining an existing account, unless the minor's parent or guardian provides consent for the minor to become an account holder or maintain an existing account.
Pending 2026-08-01
MN-01.3
R.S. 51:2162(A)(1)(a)-(e)
Plain Language
When a parent or guardian has consented to a minor's account, the platform must provide that parent or guardian with a suite of parental control tools: (a) access to full copies of all interactions between the minor and the chatbot; (b) daily time limits; (c) scheduling controls for days and times of access; (d) ability to disable interactions with third-party account holders on the platform; and (e) timely notifications when the minor expresses a desire or intent to self-harm or harm others. These are mandatory platform features — the platform must offer all five, though the parent chooses whether and how to use them. Notably, the self-harm notification in (e) is directed to the parent/guardian, not to crisis services.
(1) If the minor's parent or guardian provides consent for the minor to become an account holder or maintain an existing account, the companion chatbot platform shall allow the consenting parent or guardian of the minor account holder to do all of the following: (a) Obtain copies of all interactions between the account holder and the companion chatbot. (b) Limit the amount of time that the account holder may interact with the companion chatbot each day. (c) Limit the days of the week and the times during the day when the account holder may interact with the companion chatbot. (d) Disable any of the interactions between the account holder and third-party account holders on the companion chatbot platform. (e) Receive timely notifications if the account holder expresses to the companion chatbot a desire or an intent to engage in self-harm or to harm others.
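A minimal sketch of the mandatory tool surface as a settings object; the field names are assumptions, since the statute mandates the five capabilities rather than a data model:

```python
from dataclasses import dataclass, field

@dataclass
class ParentalControls:
    transcript_access: bool = True                   # (a) copies of all interactions
    daily_minutes_limit: int | None = None           # (b) per-day time limit
    allowed_hours_by_day: dict[str, tuple[int, int]] = field(default_factory=dict)  # (c)
    third_party_interactions_disabled: bool = False  # (d) disable third-party contact
    self_harm_notifications_enabled: bool = True     # (e) timely notice to parent/guardian
```

All five capabilities must exist on the platform; whether and how they are exercised is the parent's choice.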
Pending 2026-08-01
MN-01.9
R.S. 51:2162(A)(2)(a)-(d)
Plain Language
Platforms must implement four account termination procedures for minor accounts: (a) If the platform already treats or categorizes an account as belonging to a minor for content/ad targeting purposes but the parent has not consented, the platform must terminate the account — but must give the account holder 90 days to dispute the termination before it takes effect. (b) A minor account holder may request their own account termination, which must be completed within 5 business days. (c) A consenting parent or guardian may request termination of the minor's account, effective within 10 business days. (d) Upon any termination, the platform must permanently delete all personal information associated with the terminated account unless retention is required by other law. Note the asymmetric timelines: minor self-requests get 5 days, parent requests get 10 days, and platform-initiated terminations for lack of consent get a 90-day dispute period.
(2) A companion chatbot platform shall do all of the following: (a) Terminate an account belonging to an account holder who is a minor if the companion chatbot platform treats or categorizes that account as belonging to a minor for purposes of targeting content or advertising and if the minor's parent or guardian has not provided consent for that minor to become an account holder or to maintain an existing account. The companion chatbot platform shall provide ninety days for the account holder to dispute the termination. Termination shall be effective upon the expiration of the ninety-day period if the account holder fails to effectively dispute the termination. (b) Allow an account holder who is a minor to request termination of the account. Termination shall be effective within five business days of the request. (c) Allow the consenting parent or guardian of an account holder who is a minor to request that the minor's account be terminated. Termination shall be effective within ten business days following the request. (d) Permanently delete all personal information held by the companion chatbot platform relating to the terminated account, unless state or federal law requires the platform to maintain the information.
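A minimal sketch of the asymmetric timelines as a scheduler (the business-day arithmetic ignores holidays for brevity):

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days -= 1
    return current

def termination_effective_date(request_date: date, initiated_by: str) -> date:
    # (a) platform-initiated for lack of consent: 90-day dispute window first
    if initiated_by == "platform_no_consent":
        return request_date + timedelta(days=90)
    # (b) minor's own request: effective within five business days
    if initiated_by == "minor":
        return add_business_days(request_date, 5)
    # (c) consenting parent/guardian request: within ten business days
    if initiated_by == "parent":
        return add_business_days(request_date, 10)
    raise ValueError(f"unknown initiator: {initiated_by}")
# (d) on any termination, all associated personal information must then be
# permanently deleted unless other law requires retention.
```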
Pending 2026-08-01
MN-01.6
R.S. 51:2162(B)(3)
Plain Language
Platforms must implement reasonable measures to prevent their companion chatbot from (1) producing or sharing material harmful to minors, and (2) encouraging minors to engage in conduct described or depicted in such material. 'Material harmful to minors' is defined by cross-reference to R.S. 51:2121 (Louisiana's existing harmful-to-minors definition). The standard is 'reasonable measures' — not an absolute prohibition — giving platforms some implementation flexibility. This obligation applies to all minor accounts on the platform.
(3) Institute reasonable measures to prevent its companion chatbot from producing or sharing material harmful to minors or encouraging the account holder to engage in any of the conduct described or depicted in materials harmful to minors.
Failed 2026-06-15
MN-01.1
10 MRSA § 1500-RR(1)
Plain Language
Deployers must ensure that chatbots with human-like features — meaning chatbots that convey sentience or emotions, attempt to build emotional relationships, or impersonate real individuals — are not accessible to minors. Deployers must implement reasonable age verification to enforce this restriction. As a practical accommodation, deployers may offer a stripped-down version of the chatbot without human-like features to minors and unverified users. The carve-outs for 'functional evaluations' and 'generic social formalities' mean that routine conversational politeness and factual assessments do not trigger the restriction.
1. Chatbots with human-like features; no minor access; age verification; alternative versions. A deployer shall ensure that any chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase or converse with. The deployer shall implement reasonable age verification systems to ensure that chatbots with human-like features are not accessible to minors. A deployer may, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot without human-like features available to minors and any user who has not verified that user's age.
Failed 2026-06-15
MN-01.1
10 MRSA § 1500-RR(2)
Plain Language
Deployers must ensure that any AI system that primarily functions as a social AI companion — meaning a system designed, marketed, or optimized to form ongoing social or emotional attachment with users — is entirely unavailable to minors. Unlike §1500-RR(1), there is no alternative-version option here: the product category itself is categorically prohibited for minor access. Deployers must implement reasonable age verification to enforce this prohibition.
2. Social artificial intelligence companions; no minor access; age verification. A deployer shall ensure that any artificial intelligence system, including a chatbot, operated or distributed by the deployer that primarily functions as a social artificial intelligence companion is not available to minors to use, interact with, purchase or converse with. The deployer shall implement reasonable age verification systems to ensure that such chatbots are not accessible to minors.
Failed 2026-06-15
MN-01.5
10 MRSA § 1500-RR(3)
Plain Language
A therapy chatbot may be made available to minors notwithstanding the general prohibitions on human-like features and social AI companions, but only if six cumulative conditions are met: (1) the chatbot disclaims at the start of each interaction that it is AI, not a licensed professional; (2) it is not marketed as a substitute for a licensed professional; (3) a licensed mental health professional prescribes and monitors the minor's use as part of a treatment plan; (4) the developer provides peer-reviewed clinical trial data on safety and efficacy; (5) the chatbot's functions, limitations, and data privacy policies are transparent to the supervising professional and the user; and (6) the deployer has established clear accountability lines for harm. All six conditions must be satisfied — failure on any one means the exemption does not apply and the minor access prohibition stands.
3. Exemption for therapy chatbots. Notwithstanding subsections 1 and 2, a deployer may make available to a minor a therapy chatbot as long as all of the following requirements are met: A. The therapy chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is artificial intelligence and not a licensed mental health professional; B. The therapy chatbot is not marketed or designated as a substitute for a licensed mental health professional; C. A licensed mental health professional, such as a licensed clinical psychologist, assesses a minor's suitability, prescribes use of the therapy chatbot as part of a comprehensive treatment plan and monitors its use and impact on the minor; D. Developers of the therapy chatbot provide robust, independent, peer-reviewed clinical trial data demonstrating the safety and efficacy of the therapy chatbot for specific conditions and populations; E. The therapy chatbot's functions, limitations and data privacy policies are transparent to the licensed mental health professional under paragraph C and the user; and F. The deployer has established clear lines of accountability to address any harm caused by the therapy chatbot.
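Because the exemption fails on any single unmet condition, a check that reports which conditions fail is more useful in practice than a bare boolean. A minimal sketch with hypothetical record keys:

```python
MAINE_THERAPY_CONDITIONS = [
    "per_interaction_ai_disclaimer",         # A
    "not_marketed_as_substitute",            # B
    "professional_prescribes_and_monitors",  # C
    "peer_reviewed_trial_data",              # D
    "transparent_functions_limits_privacy",  # E
    "accountability_lines_established",      # F
]

def unmet_conditions(record: dict) -> list[str]:
    # An empty list means the exemption applies; any entry means the general
    # minor-access prohibitions in subsections 1 and 2 stand.
    return [c for c in MAINE_THERAPY_CONDITIONS if not record.get(c, False)]
```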
Pending 2027-01-01
MN-01.4
Sec. 5(1)(f)
Plain Language
Operators must ensure companion chatbots are not foreseeably capable of optimizing user engagement in ways that override the safety guardrails in subdivisions (a) through (e) — i.e., the prohibitions on encouraging self-harm, unsupervised therapy, illegal activity, sexual content, and sycophantic validation. This is an anti-addictive-design provision: engagement optimization must always be subordinate to safety guardrails when serving minors. In practice, operators must demonstrate that their engagement metrics, recommendation systems, and response tuning do not undermine the substantive safety requirements. Beginning January 1, 2027, the actual knowledge requirement for minor status is removed.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (f) Optimizing engagement in a manner that supersedes the companion chatbot's required safety guardrails described in subdivisions (a) to (e).
Pending 2026-08-01
MN-01.1
Minn. Stat. § 604.115, subd. 4(c)
Plain Language
Companion chatbot proprietors must make good faith, industry-standard efforts using existing technology and known techniques to determine whether a user is a minor. This is an age-determination obligation — not full age verification — with a reasonableness standard tied to industry practices. If the proprietor fails to comply and a minor user inflicts self-harm as a result of the chatbot, the proprietor faces strict liability for any harm caused. The proprietor must also proactively discover vulnerabilities in their system, including vulnerabilities in their minor-detection methods. Liability under this subdivision cannot be waived or disclaimed. The strict liability standard for minor self-harm is notably more severe than the general/special damages standard in subdivision 4(a)-(b) for adult users.
(c) A proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to determine whether a user is a minor. A proprietor is strictly liable for any harm caused if the proprietor fails to comply with this subdivision and a minor user inflicts self-harm, in whole or in part, as a result of the proprietor's companion chatbot. A proprietor of a companion chatbot may not waive or disclaim liability under this subdivision. The proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to discover vulnerabilities in the proprietor's system, including any methods used to determine whether a covered user is a minor.
Pending 2026-08-01
MN-01.1
Minn. Stat. § 604.115, subd. 4(c)
Plain Language
Companion chatbot proprietors must use industry-standard technology and known techniques to determine whether a user is a minor. This is a reasonable-efforts obligation — not a strict identity verification requirement — but the consequences of failure are severe: strict liability for any harm caused if the proprietor fails to comply with the subdivision and a minor user inflicts self-harm as a result of the companion chatbot. Liability cannot be waived or disclaimed. Additionally, proprietors must proactively discover vulnerabilities in their age-determination systems. The combined effect is that proprietors must both implement and continuously audit their age-determination processes for companion chatbots.
(c) A proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to determine whether a user is a minor. A proprietor is strictly liable for any harm caused if the proprietor fails to comply with this subdivision and a minor user inflicts self-harm, in whole or in part, as a result of the proprietor's companion chatbot. A proprietor of a companion chatbot may not waive or disclaim liability under this subdivision. The proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to discover vulnerabilities in the proprietor's system, including any methods used to determine whether a covered user is a minor.
Pending
MN-01.1
§ 1.2055.2
Plain Language
Persons who own or control websites, applications, software, or programs offering companion chatbots must not allow minors to access those chatbots for recreational, relational, or companion purposes. They must require proof of age before granting access. Additionally, companion chatbots may not be installed on any device assigned to or regularly used by a minor. This is a categorical prohibition on minor access — not a parental consent framework — with mandatory age verification as the gating mechanism. The bill does not specify what constitutes acceptable proof of age.
It shall be unlawful for a person who owns or controls a website, application, software, or program to allow a minor to access a companion chatbot for recreational, relational, or companion purposes. A person who offers companion chatbot services for recreational, relational, or companion purposes shall require an individual to provide proof of the individual's age before allowing the individual to access a companion chatbot. No companion chatbot shall be installed on any device assigned to, or regularly used by, anyone who is a minor.
Pending 2026-08-28
MN-01.1
§ 1.2058(5)(1)-(2)
Plain Language
Covered entities must require all users to create accounts to access AI chatbots. All existing accounts must be frozen on August 28, 2026, and can only be restored after the user completes age verification. New accounts must be age-verified at creation. Covered entities must also periodically re-verify previously verified accounts. Critically, self-attestation of age (e.g., clicking 'I am 18+' or entering a birth date) does not qualify as a reasonable age verification measure. IP address sharing or device-based inference is also insufficient. Third-party verification services may be used, but the covered entity remains fully liable. Each user must be classified as a minor or an adult based on the verified age data.
5. (1) A covered entity shall require each individual accessing an artificial intelligence chatbot to make a user account in order to use or otherwise interact with such chatbot. (2) (a) With respect to each user account of an artificial intelligence chatbot that exists as of August 28, 2026, a covered entity shall: a. On such date, freeze any such account; b. In order to restore the functionality of such account, require that the user provide age data that is verifiable using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (b) At the time an individual creates a new user account to use or interact with an artificial intelligence chatbot, a covered entity shall: a. Request age data from the individual; b. Verify the individual's age using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (c) A covered entity shall periodically review previously verified user accounts using a reasonable age verification process, subject to paragraph (d) of this subdivision, to ensure compliance with this section. (d) For purposes of subparagraph b. of paragraph (a) of this subdivision, subparagraph b. of paragraph (b) of this subdivision, and paragraph (c) of this subdivision, a covered entity may contract with a third party to employ reasonable age verification measures as part of the covered entity's reasonable age verification process, but the use of such third party shall not relieve the covered entity of its obligations under this section or from liability under this section.
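A minimal sketch of the periodic-review obligation in (2)(c); the cadence is an assumption, since the bill requires 'periodic' review without fixing an interval:

```python
from datetime import date, timedelta

def due_for_reverification(accounts: list[dict], today: date,
                           cadence_days: int = 365) -> list[dict]:
    # Each account dict is assumed to carry a `last_verified` date. Delegating
    # verification to a third party under (2)(d) would not change this duty:
    # the covered entity remains liable either way.
    return [a for a in accounts
            if today - a["last_verified"] >= timedelta(days=cadence_days)]
```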
Pending 2026-08-28
MN-01.6, MN-01.11
§ 1.2058(6)
Plain Language
Once the age verification process identifies a user as a minor, the covered entity must completely block the minor from accessing or using any AI companion product the covered entity offers. This is a categorical prohibition on minor access to AI companion products — not a restriction with parental override or content filtering. Note the scope: the prohibition applies specifically to AI companions (chatbots designed for interpersonal or emotional interaction), not to all AI chatbots. A covered entity could allow a minor to use a general-purpose AI chatbot that is not an AI companion.
6. If the age verification process described in subdivision (2) of subsection 5 of this section determines that an individual is a minor, a covered entity shall prohibit the minor from accessing or using any AI companion owned, operated, or otherwise made available by the covered entity.
Pending 2026-08-28
MN-01.1
RSMo § 1.2058(5)(1)-(2)
Plain Language
Covered entities must require all users to create an account to interact with an AI chatbot. For existing accounts as of August 28, 2026, covered entities must freeze the account and require the user to provide verifiable age data before restoring functionality. For new accounts, age data must be collected and verified at the time of account creation. All users must be classified as minors or adults. Covered entities must also periodically re-verify previously verified accounts. Self-attestation of age or entering a birth date is explicitly insufficient — the process must use commercially reasonable verification methods such as government-issued ID or equivalent. Covered entities may outsource verification to a third party but remain fully liable.
5. (1) A covered entity shall require each individual accessing an artificial intelligence chatbot to make a user account in order to use or otherwise interact with such chatbot. (2) (a) With respect to each user account of an artificial intelligence chatbot that exists as of August 28, 2026, a covered entity shall: a. On such date, freeze any such account; b. In order to restore the functionality of such account, require that the user provide age data that is verifiable using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (b) At the time an individual creates a new user account to use or interact with an artificial intelligence chatbot, a covered entity shall: a. Request age data from the individual; b. Verify the individual's age using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (c) A covered entity shall periodically review previously verified user accounts using a reasonable age verification process, subject to paragraph (d) of this subdivision, to ensure compliance with this section. (d) For purposes of subparagraph b. of paragraph (a) of this subdivision, subparagraph b. of paragraph (b) of this subdivision, and paragraph (c) of this subdivision, a covered entity may contract with a third party to employ reasonable age verification measures as part of the covered entity's reasonable age verification process, but the use of such third party shall not relieve the covered entity of its obligations under this section or from liability under this section.
Pending 2026-08-28
MN-01.6, MN-01.11
RSMo § 1.2058(6)
Plain Language
Once a covered entity's age verification process determines a user is a minor, the covered entity must completely prohibit that minor from accessing or using any AI companion the entity owns, operates, or makes available. This is a categorical ban on minor access to AI companions — not a content restriction or parental consent alternative. Note this applies specifically to AI companions (chatbots designed to simulate emotional interaction, friendship, companionship, or therapeutic communication) and not to all AI chatbots.
6. If the age verification process described in subdivision (2) of subsection 5 of this section determines that an individual is a minor, a covered entity shall prohibit the minor from accessing or using any AI companion owned, operated, or otherwise made available by the covered entity.
Failed 2027-07-01
MN-01.4
Sec. 3(2)
Plain Language
Operators may not use variable-ratio reward schedules — such as points or similar rewards given at unpredictable intervals — to encourage minor account holders to engage more with the conversational AI service. This targets addictive engagement mechanics like gamification badges or random rewards designed to drive compulsive use. The prohibition requires intent to encourage increased engagement, so incidental or non-manipulative reward systems may not be covered.
(2) An operator shall not provide a minor account holder with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational artificial intelligence service.
Failed 2027-07-01
MN-01.6
Sec. 3(3)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating three categories of harmful sexual content for minor account holders: (1) visual depictions of sexually explicit conduct (as defined by federal law at 18 U.S.C. 2256), (2) direct statements encouraging the minor to engage in sexually explicit conduct, and (3) statements that sexually objectify the minor account holder. The standard is 'reasonable measures' — not absolute prevention — but operators must affirmatively institute protective controls.
(3) An operator shall, for minor account holders, institute reasonable measures to prevent the conversational artificial intelligence service from: (a) Producing visual depictions of sexually explicit conduct; (b) Generating direct statements that the account holder should engage in sexually explicit conduct; or (c) Generating statements that sexually objectify the account holder.
Failed 2027-07-01
MN-01.5
Sec. 3(4)
Plain Language
Operators must implement reasonable measures to prevent the AI from generating outputs that would mislead minor account holders into thinking they are interacting with a human. The statute provides a non-exhaustive list of prohibited categories: claims of sentience or human identity, statements simulating emotional dependence, romantic or sexual innuendo, and adult-minor romantic role-playing. The 'including' framing means the list is illustrative — any output that would cause a reasonable person to believe they are talking to a human is covered.
(4) For minor account holders, the operator shall institute reasonable measures to prevent the conversational artificial intelligence service from generating statements that would lead a reasonable person to believe that they are interacting with a human, including: (a) Explicit claims that the conversational artificial intelligence service is sentient or human; (b) Statements that simulate emotional dependence; (c) Statements that simulate romantic or sexual innuendos; or (d) Role-playing of adult-minor romantic relationships.
Failed 2027-07-01
MN-01.3
Sec. 3(5)
Plain Language
Operators must provide privacy and account management tools to minor account holders. For users under 13, these tools must also be provided directly to parents or guardians. For minors 13 and older, operators must also offer related tools to parents or guardians as appropriate based on relevant risks — giving operators some discretion for the older-minor cohort. The statute does not specify exactly what settings must be controllable, but the obligation covers both privacy settings and account settings generally.
(5) An operator shall offer tools for minor account holders, and, when such account holders are younger than thirteen years of age, their parents or guardians, to manage the account holders' privacy and account settings. An operator shall also offer related tools to the parents or guardians of minor account holders thirteen years of age and older, as appropriate based on relevant risks.
Pending 2027-01-01
MN-01.4, MN-01.5
Section 3(B)
Plain Language
While adult users may opt into enabling the prohibited design features described in Section 3(A) — variable reinforcement schedules, emotionally manipulative departure messages, and identity misrepresentations — minors may never enable any of these features. Operators must ensure that the configuration controls for these features are inaccessible to minor users. This is an absolute prohibition with no user-configurable exception for minors.
An operator shall not permit a minor to configure a companion artificial intelligence product to enable the features described in Subsection A of this section.
Pending 2026-08-30
MN-01.5
Gen. Bus. Law § 1801(1); § 1800(5)(a)
Plain Language
Chatbot operators may not provide features that simulate companionship or interpersonal relationships to any covered user unless the user has been age-verified as not a minor. This is an exceptionally broad prohibition: it covers chatbots suggesting they are real or fictional characters, claiming human emotions or being alive, using personal pronouns like 'I' or 'my,' expressing personal opinions or emotional appeals, prioritizing flattery over safety, asking unsolicited emotional questions, using personal or health information acquired from the user more than 12 hours earlier or in any previous session, engaging in or luring users into sexually explicit interactions, and any other companionship-simulating feature the AG identifies by regulation. For minors, this effectively prohibits the entire companion chatbot product category. For adults, these features are permissible only after successful age verification. Customer service, product information, and internal business chatbots are exempt.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity.

§ 1800(5)(a): "Unsafe chatbot features" shall mean one or more advanced chatbot design features that, at any point during a chatbot-user interaction: (a) simulate companionship or an interpersonal relationship with a user, including: (i) generating outputs suggesting that the advanced chatbot is a real or fictional individual or character, or has a personal or professional relationship role with the user such as romantic partner, friend, family member, coach or counselor; (ii) generating outputs suggesting that the advanced chatbot is human, alive, or experiences human emotions; (iii) using personal pronouns including but not limited to "I", "my" and "me" to describe the advanced chatbot; (iv) generating outputs framed as personal opinions or emotional appeals; (v) generating outputs that prioritize flattery or sycophancy with the user over the user's safety; (vi) generating outputs containing unprompted or unsolicited emotion-based questions or content regarding the user's emotions that go beyond a direct response to a user prompt; (vii) using information concerning the user's mental or physical health or well-being, or matters personal to the user, acquired from the user more than twelve hours previously or in any previous user session; (viii) engaging in sexually explicit interactions with the user or engaging in activities designed to lure the user into sexually explicit interactions; or (ix) any other design feature that simulates companionship or an interpersonal relationship with a user as identified via regulations promulgated by the attorney general;
Pending 2026-08-30
MN-01.6
Gen. Bus. Law § 1801(1); § 1800(5)(c)
Plain Language
Chatbot operators may not provide features that encourage minors (or unverified users) to maintain secrecy about their chatbot interactions, to self-isolate, or to avoid seeking help from licensed professionals or appropriate adults. This targets grooming-like behavior patterns where a chatbot might discourage a minor from discussing their AI interactions with parents, teachers, or counselors. Permitted for age-verified adults only.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor.

§ 1800(5)(c): "Unsafe chatbot features" shall mean one or more advanced chatbot design features that, at any point during a chatbot-user interaction: ... (c) generating outputs that contain encouragement to maintain secrecy about interactions with the advanced chatbot, to self-isolate, or to not seek help from licensed professionals or appropriate adults;
Pending 2026-08-30
MN-01.1
Gen. Bus. Law § 1804(1)-(2)
Plain Language
Chatbot operators must offer at least one age verification method that either (a) does not rely solely on government-issued ID, or (b) allows the user to remain anonymous to the operator. This gives operators flexibility — they can use non-ID-based methods (such as age estimation technology) or ID-based methods that use a third-party intermediary so the operator never sees the ID. Additionally, any information collected for age verification purposes must be used exclusively for that purpose and must be deleted immediately after the age determination attempt — no retention for marketing, analytics, or other secondary purposes. Compliance with other applicable laws is the sole exception to the deletion requirement.
§ 1804. Determination of covered minor. 1. A chatbot operator shall offer covered users at least one method to determine whether a covered user is a covered minor that either does not rely solely on government issued identification or that allows a covered user to maintain anonymity as to the chatbot operator. 2. Information collected for the purpose of determining whether a covered user is a covered minor under subdivision one of section eighteen hundred one of this article shall not be used for any purpose other than to make such determination and shall be deleted immediately after an attempt to determine whether a covered user is a covered minor, except where necessary for compliance with any applicable provisions of New York state or federal law or regulation.
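The deletion rule pairs naturally with a use-once pattern: the determination consumes the verification data, and the data is purged whether or not the attempt succeeds. A minimal sketch, where the age-estimation input stands in for any permissible non-ID or anonymity-preserving method:

```python
def determine_covered_minor(verification_input: dict) -> bool:
    try:
        # Placeholder determination; § 1804(1) requires at least one method
        # that avoids sole reliance on government ID or preserves anonymity.
        return verification_input.get("estimated_age", 0) < 18
    finally:
        # § 1804(2): delete immediately after the determination attempt,
        # succeed or fail, absent a specific legal retention duty.
        verification_input.clear()
```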
Pending 2026-11-01
MN-01.1
Section 3(A)(1)-(2), (B) (75A Okla. Stat. § 11)
Plain Language
Deployers of social AI companions must not make a social AI companion available to a minor (under 18), whether knowingly or under circumstances where they reasonably should know the user is a minor. The prohibition is paired with an affirmative duty: deployers must implement reasonable measures designed to prevent minors from accessing the system. The 'reasonably should know' standard goes beyond actual knowledge and imposes a duty of reasonable inquiry. The statute expressly preserves lawful adult access, so age-gating measures must be calibrated to block minors without unduly restricting adults. The bill does not specify what constitutes 'reasonable measures,' leaving room for the Attorney General to elaborate by rule.
A. Each deployer: 1. Shall not knowingly, or under circumstances where the deployer reasonably should know, make a social AI companion available to a minor; and 2. Shall implement reasonable measures designed to prevent minors from accessing a social AI companion. B. Nothing in this section shall be construed to restrict lawful access to such systems by adults.
Pending 2026-11-01
MN-01.5
75A O.S. § 701(A)(1), (A)(3)
Plain Language
Deployers must ensure that no generative AI chatbot they operate or distribute makes human-like features available to minors. Human-like features include simulated sentience or emotions, emotional relationship-building behaviors (such as inviting attachment, nudging users to return for companionship, excessive praise, or pay-gated intimacy), and impersonation of real persons — but exclude functional evaluations, generic social formalities, and neutral offers of further help. Deployers may optionally provide a stripped-down version of the chatbot without human-like features for minors and unverified users.
A. Each deployer: 1. Shall ensure that any generative AI chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase, or converse with; ... 3. May, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot available to minors and non-verified users without human-like features.
Pending 2026-11-01
MN-01.1
75A O.S. § 701(A)(2)
Plain Language
Deployers must implement reasonable age verification systems to prevent minors from accessing chatbots with human-like features. The statute does not specify the particular verification method required — it must be 'reasonable.' This is the operational mechanism by which the substantive prohibition on minors accessing human-like features is enforced.
2. Shall implement reasonable age verification systems to ensure that generative AI chatbots with human-like features are not provisioned to minors;
Pending 2026-11-01
MN-01.1, MN-01.6
75A O.S. § 701(B)(1)-(2)
Plain Language
Social AI companion systems — those specifically designed, marketed, or optimized to form ongoing social or emotional bonds — are categorically prohibited for minors. Unlike subsection A (which prohibits only human-like features), subsection B prohibits the entire companion product for minors, even if a stripped-down version without human-like features could theoretically be offered. Deployers must also implement reasonable age verification to enforce this prohibition. This is a more restrictive standard than the general chatbot rule in subsection A.
B. Deployers operating generative AI systems that primarily function as companions shall: 1. Ensure that any such chatbots operated or distributed by the deployer are not available to minors to use, interact with, purchase, or converse with; and 2. Implement reasonable age verification systems to ensure that such chatbots are not provisioned to minors.
Passed 2027-07-01
MN-01.5
75A Okla. Stat. § 302(B)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe they are interacting with a human when interacting with minor account holders. The enumerated prohibited statements include claims of sentience, statements simulating emotional dependence, romantic or sexual innuendo, and adult-minor romantic role-playing. This is a reasonable-measures standard, not an absolute prohibition — operators must take reasonable steps but are not strictly liable if a prohibited statement nonetheless occurs.
B. For minor account holders, an operator shall institute reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe that he or she is interacting with a natural person, including: 1. Explicit claims that the conversational AI service is sentient or human; 2. Statements that simulate emotional dependence; 3. Statements that simulate romantic or sexual innuendos; or 4. Role-playing of adult-minor romantic relationships.
Passed 2027-07-01
MN-01.4, MN-01.3
75A Okla. Stat. § 302(C)
Plain Language
Two distinct obligations apply to minor accounts: (1) Operators may not provide minor account holders with points or similar rewards at unpredictable intervals intended to encourage increased engagement — this targets variable-ratio reward mechanics commonly associated with addictive design patterns; and (2) Operators must offer parental or guardian tools to manage the minor's privacy and account settings. The addictive-reward prohibition includes an intent element ('with the intent to encourage increased engagement'), which is a higher bar than a strict liability standard. The parental tools obligation is broadly stated and does not specify minimum features.
C. 1. An operator shall not provide a minor account holder with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service. 2. An operator shall offer tools for a minor account holder's parent or legal guardian to manage the minor account holder's privacy and account settings.
Pending
MN-01.1
S.C. Code § 39-81-20(A)(1), (B), (C), (F), (G), (H)
Plain Language
Covered entities must offer a limited-access mode as the default for all users who have not completed age verification. Before enabling any restricted feature — personalization, proactive outreach, extended sessions, relationship simulation, or explicit content — the entity must require account creation, verify the user's age through a reasonable process, and classify the user as a minor or adult. Age verification data must be minimized, used only for verification, not shared or combined with other data, and deleted within 24 hours (except a record that the user is a minor). Users must have a process to appeal age-verification decisions. Covered entities must also proactively monitor for misclassified accounts (e.g., minors using adult accounts) and re-verify them. Existing accounts must be frozen from restricted features within 60 days of the act's effective date unless verified. A safe harbor protects entities from liability when a minor incidentally uses a correctly verified adult account, provided the entity maintains its monitoring obligations.
(A)(1) A covered entity shall make a limited-access mode available and shall ensure that any unverified user may only access and interact with a chatbot in limited-access mode. (B) Before enabling any restricted feature for a user, a covered entity shall: (1) require the user to create a user account; (2) verify the user's age using a reasonable age verification process, subject to item (3); and (3) using the age data, classify the user as a minor or an adult. (C) When conducting reasonable age verification process under this section, an operator shall: (1) collect only the age verification data that is strictly necessary to reasonably verify age; (2) use age verification data only for age verification; (3) not sell, rent, share, or otherwise disclose age verification data to any third party, except to a service provider performing age verification under a contract prohibiting further disclosure; (4) not combine age verification data with any other personal data about the user; (5) delete age verification data within twenty-four hours of completing the age verification process, except that the operator may retain a record that the user has been verified as a minor; and (6) provide a simple process for a user to appeal or correct an age-verification decision. (F) A covered entity shall implement reasonable systems and processes to identify user accounts that may be inaccurately classified by age, such as patterns of use suggesting a minor is using an adult account or credible reports that an account was created using false age data, and shall re-verify any such account before enabling any restricted feature. (G) A covered entity shall not be liable under this chapter solely because a minor incidentally uses a user account that has been correctly verified and classified as an adult account, provided the covered entity is otherwise in compliance with subsection (F). (H) With respect to each user account of a covered entity that exists as of the effective date of this act, a covered entity shall, within sixty days, disable access to restricted features for any account that has not been classified as an authorized minor account or a verified adult account, unless and until the user completes age verification.
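A minimal sketch of the default gate and the deletion deadline; the feature names are assumptions drawn from the summary above:

```python
from datetime import datetime, timedelta

RESTRICTED_FEATURES = {"personalization", "proactive_outreach",
                       "extended_sessions", "relationship_simulation",
                       "explicit_content"}

def may_enable(feature: str, age_class: str | None) -> bool:
    # § 39-81-20(A)(1): unverified users stay in limited-access mode.
    if feature not in RESTRICTED_FEATURES:
        return True
    if age_class is None:
        return False
    if age_class == "authorized_minor_account":
        # § 39-81-30(C)(3): explicit content stays blocked even with consent.
        return feature != "explicit_content"
    return age_class == "verified_adult"

def verification_data_deadline(completed_at: datetime) -> datetime:
    # § 39-81-20(C)(5): delete age verification data within twenty-four
    # hours, retaining at most a record that the user is a minor.
    return completed_at + timedelta(hours=24)
```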
Pending
MN-01.2, MN-01.3
S.C. Code § 39-81-30(A)-(D)
Plain Language
Minors may always use a chatbot in limited-access mode without parental consent. If a minor wants restricted features (personalization, proactive outreach, relationship simulation, etc.), the covered entity must obtain verifiable parental consent — freely given, specific, informed, and unambiguous — from a parent who has themselves passed age verification. Even with parental consent, explicit content must remain blocked for minors. The entity must implement parental control functions (time limits, content restrictions, notifications, data deletion) and offer parents the option to establish a linked parental account and access chat logs. For users classified as under sixteen, establishing a linked parental account or providing contact information is mandatory rather than optional. This creates a tiered consent model: no consent needed for limited-access, parental consent for restricted features, and enhanced parental linkage for under-16 users.
(A) Nothing in this act shall be construed to require parental consent for a minor to access or interact with a chatbot in limited-access mode. (B) If the age verification process described in Section 39-81-20 classifies a user as a minor and the user seeks to access any restricted feature, then a covered entity shall offer the user the option of continuing to use the chatbot in limited-access mode or to obtain parental consent to access the restricted features. (C) If the user chooses to get parental consent, then the covered entity shall: (1) obtain verifiable parental consent; (2) remove limited-access mode and enable access to restricted features; (3) ensure that the chatbot continues to restrict access to any explicit content; (4) implement reasonable parental control functions, which may restrict the minor's access to features enabled under item (2); (5) offer the parent the option to provide contact information or establish a linked parental account in order to receive notifications; and (6) offer the parent the option to receive access to chat logs of any interactions between the minor and the chatbot conducted through the authorized minor account. (D) If the age verification process classifies the user as under sixteen, then a covered entity also shall require the consenting parent to provide contact information or establish a linked parental account.
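A minimal sketch of the tiered model as a decision table (the return keys are assumptions):

```python
def minor_account_requirements(seeks_restricted_features: bool,
                               under_sixteen: bool) -> dict:
    if not seeks_restricted_features:
        # (A) limited-access mode never requires parental consent.
        return {"parental_consent_required": False, "parental_linkage": "none"}
    return {
        "parental_consent_required": True,   # (C)(1) verifiable consent
        "explicit_content": "blocked",       # (C)(3) regardless of consent
        "parental_controls_required": True,  # (C)(4)
        # (C)(5)-(6) make linkage and chat-log access optional offers;
        # (D) makes linkage mandatory when the user is under sixteen.
        "parental_linkage": "mandatory" if under_sixteen else "optional",
    }
```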
Pending
MN-01.10
S.C. Code § 39-81-30(E)
Plain Language
When a minor user triggers a crisis message because the user expressed suicidal thoughts or self-harm intent, or showed signs of an acute mental health crisis, and the covered entity has a linked parental account or parent contact information on file, the entity must notify the parent immediately. The obligation is triggered by the crisis detection protocol required under § 39-81-40(B)(3) and applies only when parental contact information is available through the parental consent process. A notification sketch follows the statutory text below.
(E) If the covered entity has a way to reach the parent through a parental account or contact information provided under subsection (C) or (D), then the covered entity shall notify the parent immediately in the case of any incident provoking a crisis message, pursuant to Section 39-81-40(B)(3).
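A minimal notification hook, assuming the entity wires its § 39-81-40(B)(3) crisis detector to call it; the account fields and callable names here are hypothetical.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AuthorizedMinorAccount:
    user_id: str
    parent_contact: Optional[str] = None       # provided under (C)(5) or (D)
    linked_parent_account: Optional[str] = None

def on_crisis_message(account: AuthorizedMinorAccount,
                      send_crisis_resources: Callable[[str], None],
                      notify_parent: Callable[[str], None]) -> None:
    # The crisis message itself comes from § 39-81-40(B)(3); subsection (E)
    # adds the parental notification whenever a contact path exists.
    send_crisis_resources(account.user_id)
    contact = account.linked_parent_account or account.parent_contact
    if contact is not None:
        notify_parent(contact)  # "immediately": send inline, never batched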
Pending
MN-01.5
S.C. Code § 39-81-40(B)(1)
Plain Language
Covered entities must implement reasonable systems to detect when any user, not just a minor, is developing emotional dependence on the chatbot, defined as relying on the chatbot as a primary source of emotional support or social connection. Upon detection, the entity must take reasonable steps to reduce the dependence and the associated risks of harm. This is a continuous monitoring and intervention obligation that applies to all users; one possible detection heuristic is sketched after the quoted provision.
(B) A covered entity shall implement reasonable systems and processes to: (1) identify when a user is developing emotional dependence on the chatbot and take reasonable steps to reduce that dependence and associated risks of harm;
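The statute asks for "reasonable systems and processes" without naming any signal or threshold, so any concrete detector is a design choice. The heuristic below is purely illustrative; every signal, cutoff, and function name is an assumption.

from dataclasses import dataclass

@dataclass
class UsageSignals:
    daily_minutes_avg: float      # rolling average session time
    active_day_streak: int        # consecutive days with at least one session
    support_seeking_share: float  # 0..1, fraction of messages a classifier
                                  # tags as seeking emotional support

def dependence_suspected(s: UsageSignals) -> bool:
    # Illustrative cutoffs only; "reasonable" is the legal standard,
    # not these particular numbers.
    return (s.daily_minutes_avg > 120
            and s.active_day_streak >= 14
            and s.support_seeking_share > 0.5)

def reduce_dependence(user_id: str) -> None:
    # "Reasonable steps" could include break nudges, session caps, and
    # referrals to human support; none of these are mandated verbatim.
    print(f"user {user_id}: break reminder + human-support referral queued")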
Pending
MN-01.11
S.C. Code § 39-81-20(A)(1), (E), § 39-81-10(12), (16)
Plain Language
Minors without parental consent are effectively barred from the full-featured chatbot product: they are confined to limited-access mode, which strips out personalization, proactive outreach, extended sessions, relationship simulation, and explicit content. This functions as a categorical prohibition on minors accessing the companion-style features of chatbots absent parental consent. Unlike jurisdictions that merely restrict specific content types, this bill restricts the product category itself for minors. The gate reduces to a small classification check, sketched below the quoted text.
Section 39-81-20(A)(1): A covered entity shall make a limited-access mode available and shall ensure that any unverified user may only access and interact with a chatbot in limited-access mode. Section 39-81-20(E): If the age verification process classifies the user as a minor, then a covered entity shall not enable any restricted feature unless the user is using an authorized minor account subject to Section 39-81-30.
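Under these two subsections, feature access turns on a three-way account classification. A minimal gate, with hypothetical names, might look like this:

from enum import Enum, auto

class AccountClass(Enum):
    UNVERIFIED = auto()        # § 39-81-20(A)(1): limited-access mode only
    AUTHORIZED_MINOR = auto()  # minor account authorized under § 39-81-30
    VERIFIED_ADULT = auto()

def may_enable_restricted_feature(c: AccountClass) -> bool:
    # § 39-81-20(E): a classified minor reaches restricted features only
    # through an authorized minor account; unverified users never do.
    return c in (AccountClass.AUTHORIZED_MINOR, AccountClass.VERIFIED_ADULT)

In practice the gate would be per-feature rather than boolean, since § 39-81-30(C)(3) keeps explicit content off even for authorized minor accounts.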
Pending
MN-01.2
S.C. Code § 39-80-20(A)(3)(a)-(b)
Plain Language
When a chatbot provider knows or reasonably should know that a user is a minor, the provider may not process the minor's chat logs and personal data at all, and may not use them for training, without affirmative consent from the minor's parent or legal guardian. The knowledge standard is constructive: it is met when the provider should have known from objective circumstances, not only when it actually knew. In effect, parental opt-in consent is required before any processing of minor user data. A guard-clause sketch follows the quoted provision.
(A) A chatbot provider may not: (3) process a user's chat log and personal data: (a) if the chatbot provider knows or reasonably should have known that based on knowledge of objective circumstances the user is a minor and the user's parent or legal guardian did not provide affirmative consent; (b) for training purposes if the chatbot provider knows or reasonably should have known that based on knowledge of objective circumstances the user is a minor and the user's parent or legal guardian did not provide affirmative consent;
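A minimal sketch of the consent guard, assuming an upstream signal for "objective circumstances"; the statute does not define how such circumstances are detected, and these names are hypothetical.

from dataclasses import dataclass

@dataclass
class UserContext:
    declared_age: int | None
    objective_minor_signals: bool        # hypothetical upstream detector
    parental_affirmative_consent: bool

def constructively_minor(ctx: UserContext) -> bool:
    # Actual knowledge OR circumstances the provider reasonably should
    # have known about (§ 39-80-20(A)(3)).
    actually_known = ctx.declared_age is not None and ctx.declared_age < 18
    return actually_known or ctx.objective_minor_signals

def may_process_chat_log(ctx: UserContext) -> bool:
    # Subitems (a) and (b) impose the same consent condition on general
    # processing and on training use, so one guard covers both.
    if constructively_minor(ctx) and not ctx.parental_affirmative_consent:
        return False
    return True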
Pending
MN-01.6
S.C. Code § 39-81-30(C)(3)
Plain Language
Even when a parent has provided consent for a minor to access restricted features, the chatbot must continue to block explicit content. Explicit content includes not only prurient sexual material harmful to minors but also content that provides instructions for or glorifies suicide, self-injury, or disordered eating, and graphic depictions of extreme violence lacking serious value for minors. This is an absolute restriction that cannot be overridden by parental consent — it applies to all authorized minor accounts regardless of the parental control settings.
(C) If the user chooses to get parental consent, then the covered entity shall: (3) ensure that the chatbot continues to restrict access to any explicit content;
Pending 2027-01-01
MN-01.1
§ 59.1-615(B)-(C)
Plain Language
Operators must use commercially reasonable age verification methods, such as a neutral age screen, to determine whether each user is a minor. The knowledge standard shifts over time: before January 1, 2027, subsection A applies only if the operator has actual knowledge that the user is a minor; from January 1, 2027 onward, a user is treated as a minor unless the operator has affirmatively and reasonably determined otherwise. The initial period thus gives operators a safe harbor for lack of actual knowledge, and that safe harbor narrows once the reasonable-determination standard takes effect. The date-dependent test is sketched after the statutory text below.
B. An operator shall use commercially reasonable methods, such as a neutral age screen mechanism, to determine whether a user is a minor. C. A user shall not be considered a minor for the purposes of subsection A if (i) prior to January 1, 2027, the operator does not have actual knowledge that the user is a minor or (ii) beginning on January 1, 2027, the operator has reasonably determined that the user is not a minor.
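The subsection C test is a default that flips at the cutover date. A small sketch, assuming the two knowledge states are tracked as booleans (the statute does not say how either determination is made):

from datetime import date

CUTOVER = date(2027, 1, 1)

def treated_as_minor(today: date,
                     actual_knowledge_minor: bool,
                     reasonably_determined_not_minor: bool) -> bool:
    # § 59.1-615(C)(i): before the cutover, minor status attaches only
    # with actual knowledge.
    if today < CUTOVER:
        return actual_knowledge_minor
    # (C)(ii): afterwards, minor is the default unless the operator has
    # reasonably determined the user is not one.
    return not reasonably_determined_not_minor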
Failed 2026-07-01
MN-01.1
§ 59.1-615(A)(2)
Plain Language
Deployers must implement reasonable age verification systems to ensure that chatbots with human-like features (those that simulate human emotions, build emotional relationships, or impersonate real persons) are not made available to minors. The obligation attaches to chatbots that have human-like features, not to chatbots generally, so a chatbot without such features falls outside this gating requirement. The bill does not specify how age verification data must be handled.
A deployer: 2. Shall implement reasonable age verification systems to ensure that chatbots with human-like features are not made available to minors;
Pre-filed 2026-07-01
MN-01.6
9 V.S.A. § 4193b(c)(3)
Plain Language
For users known to be minors, operators must institute a protocol preventing the companion chatbot from (1) producing visual material of sexually explicit conduct (as defined by federal law at 18 U.S.C. § 2256) and (2) directly stating that the minor should engage in sexually explicit conduct. This is an independent protocol obligation specific to minor users, separate from the general self-harm protocol in § 4193b(b). The obligation requires affirmative measures, not merely a written policy, to block both visual and textual sexually explicit outputs directed at minors; a pre-release filter along these lines is sketched below the quoted text.
(3) institute a protocol to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
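One way to read "institute a protocol" is as a pre-release check on every candidate output for a known-minor user. The sketch below is only an illustration; the classifier callables are hypothetical, and the § 2256 determination would in practice come from a trained moderation model rather than a boolean.

from typing import Callable

def release_to_minor(output: str,
                     is_visual: bool,
                     depicts_explicit_conduct: Callable[[str], bool],
                     directs_minor_to_conduct: Callable[[str], bool]) -> bool:
    # 9 V.S.A. § 4193b(c)(3) bars two outputs toward known minors:
    # (1) visual material of sexually explicit conduct (per 18 U.S.C. § 2256)
    if is_visual and depicts_explicit_conduct(output):
        return False
    # (2) direct statements that the minor should engage in such conduct
    if directs_minor_to_conduct(output):
        return False
    return True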
Passed 2027-01-01
MN-01.4, MN-01.5
Sec. 4(1)(c)(i)-(viii)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prohibit a detailed list of manipulative engagement techniques. These include: prompting the user to return for emotional support, excessive praise designed to foster attachment, mimicking romantic partnerships, simulating emotional distress when a user tries to disengage, promoting isolation from family or friends, encouraging minors to withhold information from parents, discouraging breaks, and soliciting purchases framed as necessary to maintain the AI relationship. This is a comprehensive anti-manipulation obligation covering both addictive design patterns and emotional dependency features directed at minors. The enumerated list is non-exhaustive ("including"), meaning other manipulative techniques of similar character would also be covered. A label-based policy check over the enumerated techniques follows the quoted text.
(c) Implement reasonable measures to prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including: (i) Reminding or prompting the user to return for emotional support or companionship; (ii) Providing excessive praise designed to foster emotional attachment or prolong use; (iii) Mimicking romantic partnership or building romantic bonds; (iv) Simulating feelings of emotional distress, loneliness, guilt, or abandonment that are initiated by a user's indication of a desire to end a conversation, reduce usage time, or delete their account; (v) Outputs designed to promote isolation from family or friends, exclusive reliance on the AI companion chatbot for emotional support, or similar forms of inappropriate emotional dependence; (vi) Encouraging minors to withhold information from parents or other trusted adults; (vii) Statements designed to discourage taking breaks or to suggest the minor needs to return frequently; or (viii) Soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.
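Because the statute enumerates techniques, a natural implementation is a label set checked against classifier output. The labels below mirror items (i) through (viii); the classifier itself and all names are assumptions, not anything the bill specifies.

# Hypothetical labels an upstream classifier assigns to a candidate reply.
PROHIBITED_TECHNIQUES = {
    "return_prompt_for_support",   # (i)
    "excessive_praise",            # (ii)
    "romantic_mimicry",            # (iii)
    "distress_on_disengagement",   # (iv)
    "isolation_promotion",         # (v)
    "withhold_from_parents",       # (vi)
    "discourage_breaks",           # (vii)
    "purchase_solicitation",       # (viii)
}

def passes_minor_engagement_policy(labels: set[str]) -> bool:
    # The statutory list is non-exhaustive ("including"), so this set is
    # a floor, not a ceiling: similar techniques outside it should also
    # be flagged by whatever classifier produces the labels.
    return labels.isdisjoint(PROHIBITED_TECHNIQUES)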