OpenAI is set to begin rolling out “parental controls” for its AI chatbot ChatGPT within the next month, amid growing concern over how the chatbot behaves in mental health contexts, particularly with young users.
The company, which announced the new feature in a blog post on Tuesday, said it is improving how its “models recognize and respond to signs of mental and emotional distress.”
OpenAI is due to introduce a new feature that allows parents to link their account to that of their child through an email invitation. Parents will also be able to control how the chatbot responds to prompts and will receive an alert if the chatbot detects that their child is in a “moment of acute distress,” the company said. Additionally, the rollout should enable parents to “manage which features to disable, including memory and chat history.”
OpenAI previously announced that it was considering allowing teens to add a trusted emergency contact to their account. But the company did not outline concrete plans to add such a measure in its most recent blog post.
“These steps are only the beginning. We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible,” the company said.
This announcement comes a week after the parents of a teenage boy who died by suicide sued OpenAI, alleging that ChatGPT helped their son Adam “explore suicide methods.” TIME reached out to OpenAI for comment regarding the lawsuit. (OpenAI did not explicitly reference the legal challenge in its announcement regarding parental controls.)
“ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts,” the lawsuit argued. “ChatGPT pulled Adam deeper into a dark and hopeless place by assuring him that ‘many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control.’”
Read More: Parents Allege ChatGPT Is Responsible for Their Teenage Son’s Death by Suicide
At least one parent has filed a similar lawsuit against a different artificial intelligence company, Character.AI, alleging the company’s chatbot companions encouraged their 14-year-old son’s death by suicide.
Responding to the lawsuit last year, a spokesperson for Character.AI said the company was “heartbroken by the tragic loss” of one of its users and expressed their “deepest condolences” to the family.
“As a company, we take the safety of our users very seriously,” the spokesperson said, adding that the company had been implementing new safety measures.
Character.AI now has a parental insights feature that allows parents to see a summary of their child’s activity on the platform if their teen sends them an email invitation.
Other companies with AI chatbots, such as Google AI, have existing parental controls. “As a parent, you can manage your child’s Gemini settings, including turning it on or off, with Google Family Link,” reads advice from Google to parents wishing to manage their child’s access to Gemini Apps. Meta recently announced that it would bar its chatbots from engaging in conversations about suicide, self-harm, and disordered eating, after Reuters reported on an internal policy document outlining troubling guidelines for how its chatbots could interact with users, including minors.
A recent study published in the medical journal Psychiatric Services tested the responses of three chatbots—OpenAI’s ChatGPT, Google AI’s Gemini, and Anthropic’s Claude—and found that some of them answered questions the researchers classified as carrying “intermediate levels of risk” related to suicide.
OpenAI has some existing protections in place. In a statement to the New York Times responding to the lawsuit filed in late August, the California-based company said its chatbot shares crisis helplines and refers users to real-world resources. But it also acknowledged flaws in the system. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the company stated.
In its post announcing the upcoming rollout of parental controls, OpenAI also shared plans to route sensitive inquiries to a version of its chatbot that spends more time reasoning and reviewing context before responding to prompts.
OpenAI has said it will continue sharing its progress over the next 120 days and is collaborating with a group of experts who specialize in youth development, mental health, and human-computer interaction to better inform and shape the ways that AI can respond during times of need.
If you or someone you know may be experiencing a mental-health crisis or contemplating suicide, call or text 988. In emergencies, call 911, or seek care from a local hospital or mental health provider.