OpenAI announced new teen safety features for ChatGPT on Tuesday as part of an ongoing effort to respond to concerns about how minors engage with chatbots. The company is building an age-prediction system that identifies whether a user is under 18 and routes them to an “age-appropriate” experience that blocks graphic sexual content. If the system detects that a user is considering suicide or self-harm, it will contact the user’s parents. In cases of imminent danger, if a user’s parents are unreachable, it may contact the authorities.
In a blog post about the announcement, CEO Sam Altman wrote that the company is attempting to balance freedom, privacy, and teen safety.
“We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict,” Altman wrote. “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”
While OpenAI tends to prioritize privacy and freedom for adult users, for teens the company says it puts safety first. By the end of September, the company will roll out parental controls that let parents link their child’s account to their own, manage conversations, and disable features. Parents can also receive notifications when “the system detects their teen is in a moment of acute distress,” according to the company’s blog post, and set limits on the times of day their children can use ChatGPT.
The moves come as deeply troubling headlines continue to surface about people dying by suicide or committing violence against family members after engaging in lengthy conversations with AI chatbots. Lawmakers have taken notice, and both Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI firms to hand over information about how their technologies impact kids, according to Bloomberg.
At the same time, OpenAI is still under a court order mandating that it preserve consumer chats indefinitely—a fact that the company is extremely unhappy about, according to sources I’ve spoken to. Today’s news is both an important step toward protecting minors and a savvy PR move to reinforce the idea that conversations with chatbots are so personal that consumer privacy should only be breached in the most extreme circumstances.
“A Sexbot Avatar in ChatGPT”
According to sources I’ve spoken to at OpenAI, the burden of protecting users weighs heavily on many researchers. They want to create a user experience that is fun and engaging, but that experience can quickly veer into disastrous sycophancy. It’s positive that companies like OpenAI are taking steps to protect minors. At the same time, in the absence of federal regulation, there’s still nothing forcing these firms to do the right thing.
In a recent interview, Tucker Carlson pushed Altman to answer exactly who is making these decisions that impact the rest of us. The OpenAI chief pointed to the model behavior team, which is responsible for tuning the model for certain attributes. “The person I think you should hold accountable for those calls is me,” Altman added. “Like, I’m a public face. Eventually, like, I’m the one that can overrule one of those decisions or our board.”
He’s right, yet some of the imminent harms seem to escape him. In another podcast interview with YouTuber Cleo Abram, Altman said that “sometimes we do get tempted” to launch products “that would really juice growth.” He added: “We haven’t put a sexbot avatar in ChatGPT yet.” Yet! How strange.
OpenAI recently released research on who uses ChatGPT and how they use it. That research excluded users under the age of 18. We don’t yet have a full understanding of how teens are using AI, and it’s an important question to answer before the situation grows more dire.
Sources Say
Elon Musk’s xAI is suing a former staffer who left the company to join OpenAI, alleging in a complaint that he misappropriated trade secrets and confidential information. In an era when AI companies are poaching staffers with multimillion-dollar compensation packages, I’m sure we’ll see more lawsuits like this pop up.
The staffer in question, Xuechen Li, never made it to OpenAI’s internal Slack, according to two sources at the company. It’s unclear whether his offer was rescinded or he was onboarded only to be let go. OpenAI and Li did not respond to WIRED’s request for comment.
This is an edition of Kylie Robison’s Model Behavior newsletter. Read previous newsletters here.