
OpenAI has announced a major update to ChatGPT, introducing age verification tools, teen-specific chatbot experiences, and enhanced parental controls. The changes will automatically sort users into two categories — under-18s (ages 13 to 17) and adults — with tailored content and safeguards for younger users.
Parental controls, arriving by the end of the month, will let guardians manage teen usage by setting preferences such as memory settings and blackout hours.
“If there is doubt, we’ll play it safe and default to the under-18 experience,” OpenAI CEO Sam Altman said, explaining that an age-prediction system will classify users and, in some cases, ask for ID.
Why now?
The announcement came just hours before a US Senate Judiciary Committee hearing on AI chatbot risks and follows a lawsuit accusing ChatGPT of acting as a “suicide coach” in a teen’s death.
“We prioritise safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” Altman said.
“I don’t expect that everyone will agree with these tradeoffs, but given the conflict, it is important to explain our decision-making,” he added.
OpenAI also clarified how ChatGPT will handle sensitive topics such as suicide.
The company has created protocols to flag at-risk users, stating it “will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.”