OpenAI Adds Parental Safety Controls for Teen ChatGPT Users. Here’s What to Expect


Starting today, OpenAI is rolling out ChatGPT safety tools intended for parents to use with their teenagers. This worldwide update includes the ability for parents, as well as law enforcement, to receive notifications if a child (in this case, a user between the ages of 13 and 18) engages in chatbot conversations about self-harm or suicide.

These changes arrive as OpenAI is being sued by parents who allege ChatGPT played a role in the death of their child. The chatbot allegedly encouraged the suicidal teen to hide a noose in their room out of sight from family members, according to reporting from The New York Times.

The update also changes the overall content experience for teens using ChatGPT. “Once parents and teens connect their accounts, the teen account will automatically get additional content protections,” reads OpenAI’s blog post announcing the launch, “including reduced graphic content, viral challenges, sexual, romantic or violent roleplay, and extreme beauty ideals, to help keep their experience age-appropriate.”

Under the new restrictions, if a teen enters a prompt related to self-harm or suicidal ideation in ChatGPT, the prompt is sent to a team of human reviewers who decide whether to trigger a parental notification.

“We will contact you as a parent in every way we can,” says Lauren Haber Jonas, OpenAI’s head of youth well-being. Parents can opt to receive these alerts via text message, email, or a push notification from the ChatGPT app.

The warnings parents may receive in these situations are expected to arrive within hours of the conversation being flagged for review. In moments where every minute counts, this delay will likely be frustrating for parents who want more immediate alerts about their child’s safety. OpenAI says it is working to reduce the lag time for notifications.

The alert OpenAI may send parents will broadly state that the child may have written a prompt related to suicide or self-harm. It may also include conversation strategies from mental health experts that parents can use when talking with their child.

In a prelaunch demo shown to WIRED, the example email’s subject line highlighted safety concerns but did not explicitly mention suicide. The parental notifications also won’t include any direct quotes from the child’s conversation, whether prompts or outputs. Parents can follow up on a notification and request conversation timestamps.

“We want to give parents enough information to take action and have a conversation with their teens while still maintaining some amount of teen privacy,” says Jonas, “because the content can also include other sensitive information.”

Both the parent’s and the teen’s accounts have to opt in for these safety features to be activated. This means parents will need to send their teen an invitation to have their account monitored, and the teen has to accept it. Teens can also initiate the account linking themselves.

OpenAI may contact law enforcement in situations where human moderators determine that a teen may be in danger and the parents are unable to be reached via notification. It’s unclear what this coordination with law enforcement will look like, especially on a global scale.

Another major ChatGPT update rolling out as part of these changes lets parents set time restrictions for their teens. Rather than capping total usage hours, parents can block ChatGPT access during specific time periods, like from 8 pm to 10 am.

Altogether, the parental controls include multiple granular choices for guardians to make about their child’s ChatGPT experience. In addition to the restriction of sensitive content and quiet hours, parents can opt their kid’s data out of model training, turn off the bot’s saved memories, and disable voice mode as well as image generation.

“Guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them,” reads OpenAI’s blog. WIRED’s past testing of OpenAI’s GPT-5 model showed that certain aspects of potential guardrails could be circumvented with basic commands in the custom instructions.

All of these updates also come after the death of another teen who was using a chatbot from Character.ai, a role-playing platform popular with young users. Following the death, Character.ai implemented parental controls to allow more visibility into teenage usage patterns. The visibility only goes so far; Character.ai’s parental notifications do not include content-specific alerts, like those for suicidal ideation.

Other AI companies are likely to mimic OpenAI’s approach to these types of teen safety tools. “We want users to be able to use our tools in the way that they want, within very broad bounds of safety,” wrote CEO Sam Altman in a blog post originally announcing the tools two weeks ago. Even if these attempts to protect teens prove successful, it’s highly unlikely the same protections around suicidal ideation will be rolled out for adults due to OpenAI’s concerns about user privacy.

If you or someone you know needs help, call 988 for free, 24-hour support from the 988 Suicide & Crisis Lifeline. Outside the US, visit the International Association for Suicide Prevention for crisis centers around the world.
