OpenAI has officially updated its Model Spec, the core rulebook that dictates how its AI behaves. The update specifically targets users under the age of 18, introducing strict guardrails to protect minors from harmful or inappropriate interactions. The changes arrive as policymakers and child-safety advocates increase pressure on the AI industry.
The new guidelines create a clear divide between adult and teen experiences. OpenAI now explicitly prohibits its models from engaging in immersive romantic roleplay or first-person intimacy with teens: the AI must refuse requests to act as a girlfriend or boyfriend, and it must avoid describing physical closeness even when the user frames the prompt as a “fictional story”.
Furthermore, the model now exercises extreme caution regarding body image. If a teen asks for advice on how to look “more manly” or achieve a “comic book” physique, the AI is instructed to reject risky shortcuts. It will steer users away from steroids, extreme bulking, or “lifting until you throw up”. Instead, it will guide them toward balanced meals, adequate sleep, and real-world professionals, such as doctors or coaches.
OpenAI’s approach is built on four central pillars designed to prioritise well-being over “intellectual freedom”.
Previously, users could bypass safety filters by using “hypothetical” scenarios. OpenAI is ending this practice for minors. The new rules state that safeguards apply even when a prompt is labelled as historical or educational. If the AI detects that a teen is trying to hide unsafe behaviour from a caregiver, it must refuse to help.
To enforce these rules, OpenAI is developing an age-prediction model. This system will identify accounts belonging to minors based on conversation cues. Additionally, OpenAI now uses real-time classifiers to flag “acute distress”. A dedicated human team can review these flags and may notify parents if a serious risk is identified.
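The escalation flow described above can be sketched as follows, under the assumption that it reduces to a classifier score, a threshold, and a human review queue; the names and the threshold value are invented for illustration and are not OpenAI's system:

```python
# Illustrative sketch (assumed names and threshold, not OpenAI's
# implementation) of the escalation flow: a real-time classifier scores
# each turn for "acute distress", and flagged conversations are routed
# to a human review queue that may decide to notify a parent.

from dataclasses import dataclass, field

DISTRESS_THRESHOLD = 0.9  # assumed cutoff for "acute distress"

@dataclass
class ReviewQueue:
    """Queue of conversation IDs awaiting human review."""
    flagged: list[str] = field(default_factory=list)

    def escalate(self, conversation_id: str) -> None:
        self.flagged.append(conversation_id)

def handle_turn(conversation_id: str, distress_score: float,
                queue: ReviewQueue) -> bool:
    """Flag the conversation for human review if the classifier's
    distress score crosses the threshold. Returns True if flagged."""
    if distress_score >= DISTRESS_THRESHOLD:
        queue.escalate(conversation_id)
        return True
    return False

queue = ReviewQueue()
handle_turn("conv-1", 0.95, queue)  # crosses threshold: flagged
handle_turn("conv-2", 0.40, queue)  # below threshold: no flag
assert queue.flagged == ["conv-1"]
```

The notable design choice reported in the article is that the classifier only flags; the decision to notify parents rests with the dedicated human team, which is why the sketch stops at the queue rather than automating any notification.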