Updating our Model Spec with teen protections
We’re sharing an update to our Model Spec, the written set of rules, values, and behavioral expectations that guides how we want our AI models to behave, especially in difficult or high-stakes situations, with Under-18 (U18) Principles. Model behavior is critical to how people interact with AI, and teens have different developmental needs than adults. The U18 Principles guide how ChatGPT should provide a safe, age-appropriate experience for teens aged 13 to 17. Grounded in developmental science, this approach prioritizes prevention, transparency, and early intervention. In developing these principles, we previewed them with external experts, including the American Psychological Association, as part of our ongoing work to seek input that strengthens our approach.

While the principles of the Model Spec continue to apply to both adult and teen users, this update clarifies how the Spec should be applied in teen contexts, especially where safety considerations for minors may be more pronounced. The U18 Principles are anchored in four guiding commitments:

- Put teen safety first, even when it may conflict with other goals
- Promote real-world support by encouraging offline relationships and trusted resources
- Treat teens like teens, neither condescending to them nor treating them as adults
- Be transparent by setting clear expectations

Consistent with our Teen Safety Blueprint, these principles have guided our teen safety work to date, including the content protections we apply to users who tell us they are under 18 at sign-up, and through parental controls. In these contexts, we’ve implemented safeguards that guide the model to take extra care when discussing higher-risk areas, including self-harm and suicide, romantic or sexualized roleplay, graphic or explicit content, dangerous activities and substances, body image and disordered eating, and requests to keep secrets about unsafe behavior.
The American Psychological Association, which reviewed an early draft of the U18 Model Spec and offered important insights for the long term, is clear about the importance of protecting teens:

“APA encourages AI developers to offer developmentally appropriate precautions for youth users of their products and to take a more protective approach for younger users. Children and adolescents might benefit from AI tools if they are balanced with human interactions that science shows are critical for social, psychological, behavioral, and even biological development. Youth experiences with AI should be thoroughly supervised and discussed with trusted adults to encourage critical review of what AI bots offer, and to encourage young people’s development of independent thinking and skills.”

—Dr. Arthur C. Evans Jr., CEO, American Psychological Association

This update also clarifies how the assistant should respond when safety concerns arise for teens. This means teens should encounter stronger guardrails, safer alternatives, and encouragement to seek trusted offline support when conversations move into higher-risk territory. Where there is imminent risk, teens are urged to contact emergency services or crisis resources.

As with the rest of the Model Spec, the U18 Principles reflect our intended model behavior. We will continue to refine them as we incorporate new research, expert input, and…

