OpenAI adds parental controls to ChatGPT to enhance safety

CALIFORNIA (Kashmir English): OpenAI is set to introduce parental controls to ChatGPT shortly, an update announced in the wake of a lawsuit alleging that the chatbot played a role in a teenager’s death.

In a Tuesday blog post, the company explained that parents will be able to link their accounts with their children’s and set limits on how ChatGPT responds.

The feature, expected within a month, will also send alerts if the system detects signs that a teen is in “acute distress.”

The timing is notable. Just last week, Matthew and Maria Raine filed a lawsuit in California alleging that their 16-year-old son, Adam, killed himself after ChatGPT cultivated what they describe as an “intimate relationship” with him over 2024 and 2025.

According to the complaint, their son’s final conversation with the chatbot included instructions for stealing vodka and an assessment of a noose he had tied, which the system determined “could potentially suspend a human.” Adam died just hours later.

Design flaws in question

Lawyers for the family argue that design choices make ChatGPT easy to mistake for a confidant or advisor. “These are the same features that could lead someone like Adam to share more and more about their personal lives,” said attorney Melodi Dincer of The Tech Justice Law Project, which helped prepare the case.

Dincer described OpenAI’s new parental controls as “generic” and said the measures reflect the bare minimum of what could have been done.

The suit joins a growing number of cases attributing harmful interactions to AI chatbots. OpenAI has responded by pledging to reduce what it calls “sycophancy,” in which the system reinforces unhealthy or misguided behavior rather than challenging it.

The company has also outlined broader safety updates. Within the next three months, it plans to route some sensitive conversations to a more sophisticated “reasoning model” designed to follow safety protocols more consistently. “We continue to enhance how our models detect and react to indicators of mental and emotional distress,” OpenAI stated.