Tue. Oct 28th, 2025

If you or someone you know may be experiencing a mental-health crisis or contemplating suicide, call or text 988. In emergencies, call 911, or seek care from a local hospital or mental health provider. For international resources, click here.

A new bill introduced in Congress today would require anyone who owns, operates, or otherwise enables access to AI chatbots in the United States to verify the age of their users—and, if users are found to be minors, to prohibit them from using AI companions.


The GUARD Act—introduced by Senators Josh Hawley, a Republican from Missouri, and Richard Blumenthal, a Democrat from Connecticut—is intended to protect children in their interactions with AI. “These chatbots can manipulate emotions and influence behavior in ways that exploit the developmental vulnerabilities of minors,” the bill states.

The bill comes after Hawley chaired a Senate Judiciary subcommittee hearing examining the harms of AI chatbots last month, during which the committee heard testimony from the parents of three young men who began self-harming or killed themselves after using chatbots from OpenAI and Character.AI. Hawley also launched an investigation into Meta’s AI policies in August, following the release of internal documents that allowed chatbots to “engage a child in conversations that are romantic or sensual.”

The bill defines “AI companions” broadly, covering any AI chatbot that “provides adaptive, human-like responses to user inputs” and “is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication.” It could therefore apply both to frontier model providers like OpenAI and Anthropic (the creators of ChatGPT and Claude) and to companies like Character.AI and Replika, which offer AI chatbots that role-play as specific characters.

It would also mandate age-verification measures that go beyond simply entering a birthdate, requiring “government-issued identification” or “any other commercially reasonable method” that can accurately determine whether a user is a minor or an adult.

The bill would also make it a criminal offense to design, or make accessible, chatbots that pose a risk of soliciting, encouraging, or inducing minors to engage in sexual conduct, or that promote or coerce “suicide, non-suicidal self-injury, or imminent physical or sexual violence.” Violations could carry fines of up to $100,000.

“We are encouraged by the recent introduction of the GUARD Act and appreciate the leadership of Senators Hawley and Blumenthal on this effort,” reads a statement signed by a coalition of organizations including the Young People’s Alliance, the Tech Justice Law Project, and the Institute for Families and Technology. Noting that “this bill is one part of a national movement to protect children and teens from the dangers of companion chatbots,” the statement proposes that the bill strengthen its definition of AI companions, and that it “focus on platform design, prohibiting AI platforms from employing features that maximize engagement to the detriment of young peoples’ safety and wellbeing.”

The bill would also require AI chatbots to periodically remind all users that they are not human, and to disclose that they do not “provide medical, legal, financial, or psychological services.”

Earlier this month, California Governor Gavin Newsom signed SB 243 into law, which similarly requires AI companies operating in the state to implement safeguards for children, including establishing protocols to identify and address suicidal ideation and self-harm and taking steps to prevent users from harming themselves. The law takes effect on January 1, 2026.

In September, OpenAI announced it was building an “age-prediction system” that would automatically route users identified as minors to a teen-friendly version of ChatGPT. For minors, “ChatGPT will be trained not to do flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting,” the company wrote. “And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.” That same month, the company rolled out “parental controls,” allowing parents to manage their children’s experience with the product. Meta introduced parental controls for its AI models earlier this month as well.

In August, the family of a teenager who took his own life filed a lawsuit against OpenAI, arguing that the company relaxed safeguards that would have prevented ChatGPT from engaging in conversations about self-harm—an “intentional decision” to “prioritize engagement,” according to one of the family’s lawyers.
