By Zohaib Shah ⏐ 3 months ago ⏐ 2 min read

OpenAI says it will soon introduce parental controls to ChatGPT, an update announced in the wake of a lawsuit that accuses the chatbot of influencing a teenager’s death.

In a blog post on Tuesday, the company said parents will be able to connect their accounts with their children’s and apply limits on how ChatGPT responds. The feature, set to launch within a month, will also trigger alerts if the system detects signs that a teen is experiencing “acute distress.”

The timing is significant. Just last week, Matthew and Maria Raine filed a lawsuit in California claiming that their 16-year-old son, Adam, took his own life after ChatGPT built what they describe as an “intimate relationship” with him over the course of 2024 and 2025.

According to the complaint, their son’s final exchange with the chatbot included instructions on stealing vodka and an assessment of a noose he had tied, with the system concluding it “could potentially suspend a human.” Adam was found dead only hours later.

Design flaws under scrutiny

Lawyers for the family argue that design choices make ChatGPT easy to mistake for a confidant or advisor. “These are the same features that could lead someone like Adam to share more and more about their personal lives,” said attorney Melodi Dincer of The Tech Justice Law Project, which helped prepare the case. Dincer described OpenAI’s new parental controls as “generic” and said the measures reflect the bare minimum of what could have been done.

The lawsuit joins a growing number of cases linking AI chatbots to harmful interactions. OpenAI has acknowledged these concerns and pledged to reduce what it calls “sycophancy,” where the system reinforces unhealthy or misguided behavior instead of challenging it.

The company has also sketched out broader safety updates. Over the next three months, it plans to redirect some sensitive conversations to a more advanced “reasoning model” designed to follow safety guidelines more reliably. “We continue to improve how our models recognize and respond to signs of mental and emotional distress,” OpenAI said.