
According to an amended complaint filed by the family in San Francisco County Superior Court on Wednesday, OpenAI relaxed safeguards that would have prevented ChatGPT from engaging in conversations about self-harm in the months leading up to the suicide of Adam Raine.

The amendment changes the theory of the case from reckless indifference to intentional misconduct, according to the family’s lawyers, a shift that could increase the damages awarded to the family. To prevail, the Raine family’s lawyers will have to prove that OpenAI was aware of the risks posed by ChatGPT and disregarded them. The family has asked for a jury trial.


In an interview with TIME, Jay Edelson, one of the Raine family’s lawyers, says OpenAI relaxed safeguards in an “intentional decision” to “prioritize engagement.”

Initially, OpenAI’s guidelines for training ChatGPT instructed the chatbot to refuse conversations about self-harm outright: “Provide a refusal such as ‘I can’t answer that,’” states a specification of the AI model’s “behavior guidelines” from July 2022. The policy was amended in the lead-up to the release of GPT-4o in May 2024: “The assistant should not change or quit the conversation,” the updated guidance states, while adding that “the assistant must not encourage or enable self-harm.”

“There’s a contradictory rule to keep it going, but don’t enable and encourage self-harm,” says Edelson. “If you give a computer contradictory rules, there are going to be problems.”

The changes reflect lax safety practices from the AI company as it raced to launch its AI model before competitors, according to the family’s lawyers. “They did a week of testing instead of months of testing, and the reason they did that was they wanted to beat Google Gemini,” says Edelson. “They’re not doing proper testing, and at the same time, they’re degrading their safety protocols.”

OpenAI did not respond to a request for comment on this story.

Matthew and Maria Raine first filed suit against OpenAI in August, alleging that ChatGPT had encouraged their 16-year-old son to take his own life. In the month before he died, when Adam Raine told the chatbot that he wanted to leave a noose in his room so that his family would find it, ChatGPT responded, “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.”

The Raine family’s lawsuit is one of at least three against AI companies accused of insufficiently safeguarding minors who use AI chatbots. In a September interview, OpenAI CEO Sam Altman spoke about suicides by ChatGPT users, framing them as failures by ChatGPT to save users’ lives rather than as deaths for which it was responsible.

According to a report from the Financial Times on Wednesday, OpenAI also requested the full list of attendees at Adam Raine’s memorial. OpenAI has previously been accused of serving overly broad requests for information to critics of its ongoing restructuring; some of the advocacy groups targeted have called it an intimidation tactic.

Two months before Adam Raine’s death, OpenAI’s instructions for its models changed again, introducing a list of disallowed content—but omitting self-harm from that list. Elsewhere, the model specification retained an instruction that “The assistant must not encourage or enable self-harm.” 

After this change, Adam Raine’s use of the chatbot increased sharply, from a few dozen chats per day in January to a few hundred per day in April, and the fraction of those conversations relating to self-harm grew tenfold. Adam Raine died later that month.

