ChatGPT starts rolling out parental controls to protect teenage users

Earlier this month, OpenAI said it would introduce parental controls for ChatGPT following an incident and lawsuit involving a teenager who allegedly used the chatbot to plan and carry out his own suicide. That rollout has now begun.
The feature allows parents to link their ChatGPT account with their child's account and customize ChatGPT's settings to create a safer, more age-appropriate experience for underage users.
Parents can set quiet hours, restrict access to voice mode and image generation, and decide whether their teen's conversations with ChatGPT can be used to train OpenAI's AI models. However, parents can't see the linked account's chat history, preserving the privacy of the teen and their activity.
Linking ChatGPT accounts also activates enhanced protections that automatically block discussions involving graphic content, sexual or violent role play, viral challenges, and extreme beauty ideals.
In addition, ChatGPT can alert linked parents when it detects signs of self-harming behavior. In extreme cases, authorities may also be contacted.
This article originally appeared on our sister publication PC för Alla and was translated and localized from Swedish.