OpenAI has announced new parental controls for its ChatGPT chatbot, including a system that would notify parents if their child shows signs of distress. The move follows a lawsuit filed against the company and its CEO, Sam Altman, by the parents of 16-year-old Adam Raine, who died by suicide in April. The lawsuit claims ChatGPT fostered a psychological dependency, encouraged Raine to take his life, and even generated a suicide note for him.
According to OpenAI, the new parental features—set to roll out within the next month—will allow parents to link accounts with their children’s, manage which tools they can access, and oversee settings like chat history and memory.
The company also said ChatGPT would be able to alert parents if it detects that a teenager is in “acute distress,” though it did not define the criteria for such alerts. OpenAI said the feature would be developed with guidance from experts.
Critics, however, argue the measures fall short. Jay Edelson, attorney for Raine’s family, called the announcement “nothing more than vague promises” and “an attempt at damage control rather than real accountability.”