A lawyer representing a California couple who are suing ChatGPT-maker OpenAI over the death of their 16-year-old son has criticised the popular chatbot’s new parental controls.
OpenAI introduced the controls in the wake of the family’s allegations that ChatGPT encouraged their son to take his own life.
OpenAI said parents of teenage users will soon be able to receive a notification if the platform thinks their child is in “acute distress”, among other parental controls.
But Jay Edelson, a lawyer representing the family, said the announcement was “OpenAI’s crisis management team trying to change the subject” and called for the chatbot to be taken down.
“Rather than take emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better,” he said.
The lawsuit filed in California last week by Matt and Maria Raine, parents of 16-year-old Adam Raine, was the first legal action accusing OpenAI of wrongful death.
The family’s lawsuit included chat logs between Adam, who died in April, and ChatGPT that show him describing his suicidal thoughts.
They argue the programme validated his “most harmful and self-destructive thoughts”, and the lawsuit accuses OpenAI of negligence and wrongful death.
When news of the lawsuit emerged last week, OpenAI published a note on its website stating that ChatGPT is trained to direct people in distress to sources of professional help, such as the Samaritans in the UK.
The company, however, did acknowledge “there have been moments where our systems did not behave as intended in sensitive situations”.
Now it has published a further update outlining additional planned measures that will allow parents to:
- Link their account with their teen’s account
- Manage which features to disable, including memory and chat history
- Receive notifications when the system detects their teen is in a moment of “acute distress”
On how acute distress will be assessed, OpenAI said that “expert input will guide this feature to support trust between parents and teens”.
The company stated that it is working with a group of specialists in youth development, mental health and “human-computer interaction” to help shape an “evidence-based vision for how AI can support people’s well-being and help them thrive”.
OpenAI has not yet responded to Mr Edelson’s claims.
Users of ChatGPT must be at least 13 years old, and those under 18 must have a parent’s permission to use it, according to OpenAI.
OpenAI’s announcement is the latest in a series of measures by the world’s leading tech firms aimed at making children’s online experiences safer.
Many have come in response to new legislation, such as the Online Safety Act in the UK, which prompted the introduction of age verification on Reddit, X and pornography websites.
Earlier this week, Meta, which operates Facebook and Instagram, said it would introduce more guardrails for its artificial intelligence (AI) chatbots, including blocking them from talking to teens about suicide, self-harm and eating disorders.
A US senator had launched an investigation into the tech giant after notes in a leaked internal document suggested its AI products could have “sensual” chats with teenagers.
The company described the notes in the document, obtained by Reuters, as erroneous and inconsistent with its policies, which prohibit any content sexualising children.