OpenAI has released a Teen Safety Blueprint tailored for India, outlining how ChatGPT will respond differently to users under 18 through age-based protections, parental controls, and mental-health safeguards. Designed for shared devices, multilingual homes, and educator involvement, the framework commits to ongoing collaboration with families, experts, and policymakers.
As artificial intelligence steadily becomes a study partner, idea generator, and digital tutor for young people, OpenAI has laid out a detailed plan for how ChatGPT should behave when its user is a teenager in India. The company's newly released Teen Safety Blueprint for India argues that if AI is to expand opportunity at scale, it must first earn the trust of young users and the families, teachers, and communities that shape their digital lives.
The document begins with a clear premise: teens are growing up with AI, but they are not adults. The way ChatGPT responds to a 15-year-old, it argues, should be fundamentally different from how it responds to an adult. For minors, safety must take precedence over privacy and freedom. "This is a new and powerful technology, and we believe minors need significant protection," the blueprint states.
This approach is especially contextualised for India. The blueprint references findings from the RATI Foundation's Ideal Internet Report 2024–25, which notes that 62% of Indian teens use shared devices and that most safety tools across platforms assume English-speaking, single-user, private-device environments. In reality, Indian teens navigate multilingual, shared, and often supervised digital spaces, where parents, siblings, and teachers are closely involved. Many existing digital safeguards, the report suggests, simply do not translate well into this environment.
OpenAI's framework is built around this reality. It proposes that protections for teens should be "by default," rather than optional settings that depend on users or parents discovering and enabling them. At the core of the blueprint is the idea of identifying teen users on the platform through privacy-protective, risk-based age estimation tools. Where sufficient information is not available to determine age, the system would default to more protective safeguards.
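The decision rule the blueprint describes — apply teen safeguards when age is uncertain — can be sketched in a few lines. This is purely illustrative: the class, field names, and confidence threshold below are invented for the sketch and do not reflect OpenAI's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

TEEN_MAX_AGE = 17  # users under 18 receive the teen experience

@dataclass
class AgeSignal:
    """Hypothetical output of a risk-based age estimator."""
    estimated_age: Optional[int]  # None when estimation is inconclusive
    confidence: float             # 0.0 to 1.0

def teen_safeguards_apply(signal: AgeSignal, threshold: float = 0.9) -> bool:
    """Mirror the blueprint's stated rule: when there is not enough
    information to determine age, default to the more protective
    (teen) set of safeguards."""
    if signal.estimated_age is None or signal.confidence < threshold:
        return True  # insufficient information -> protective default
    return signal.estimated_age <= TEEN_MAX_AGE
```

The key design choice is the fallback direction: an inconclusive estimate yields the restrictive mode, so the cost of an estimation error falls on an adult seeing teen safeguards rather than on a teen seeing adult content.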
The aim is straightforward: treat teens like teens and adults like adults. Age estimation, according to the document, allows AI systems to apply the right safety policies to the right users without collecting excessive personal data. It also opens the door to age-appropriate system behaviour rules that are automatically applied to users under 18.
These under-18 policies are described in concrete terms. For teen users, ChatGPT should not depict suicide or self-harm, should not instruct or encourage dangerous stunts or access to illegal substances, and should avoid reinforcing harmful body ideals through appearance ratings, body comparisons, or restrictive diet coaching. Graphic, immersive, or role-playing violent and intimate content is also prohibited. The policies recognise that teens have developmental needs that differ significantly from adults and that AI systems must reflect that difference in how they generate responses.
Beyond content moderation, the blueprint places significant emphasis on parental and educator involvement. OpenAI argues that in India, where families and schools play a central role in shaping young people's digital experiences, AI safeguards must complement existing support systems rather than replace them. To this end, ChatGPT's parental controls are positioned as a key layer of protection.
These controls allow parents to manage privacy and data settings, including the ability to turn off memory and chat history so that conversations do not persist across sessions. Parents can link their account with their teen's through a simple email invitation, receive alerts if their teen's activity suggests self-harm intent, set blackout hours to ensure breaks from screen time, and control how ChatGPT responds to their teen through built-in age-appropriate behaviour rules. ChatGPT, the blueprint reiterates, is designed for users aged 13 and above.
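The controls listed above amount to a small configuration surface. The sketch below models them as a settings object, with the blackout-hours check worked through as an example; all field names are hypothetical, chosen only to label the features the article describes.

```python
from dataclasses import dataclass

@dataclass
class ParentalControls:
    """Illustrative model of the controls the article lists;
    field names are invented, not OpenAI's API."""
    memory_enabled: bool = False           # memory can be turned off
    chat_history_enabled: bool = False     # history can be turned off
    self_harm_alerts: bool = True          # parent alerted on self-harm signals
    blackout_start: str = "21:00"          # screen-time break window
    blackout_end: str = "07:00"
    age_appropriate_mode: bool = True      # built-in under-18 behaviour rules

def in_blackout(controls: ParentalControls, hhmm: str) -> bool:
    """Return True if a zero-padded 'HH:MM' time falls inside the
    blackout window, handling windows that wrap past midnight.
    Lexicographic comparison is valid for zero-padded HH:MM strings."""
    start, end = controls.blackout_start, controls.blackout_end
    if start <= end:
        return start <= hhmm < end
    return hhmm >= start or hhmm < end    # window wraps past midnight
```

A real implementation would enforce these settings server-side on the linked teen account; the point of the sketch is just that each control in the article maps to a simple, inspectable setting.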
Mental health and well-being form another pillar of the framework. OpenAI describes features already present for all users, such as in-app reminders during long sessions to encourage breaks and systems that direct users to real-world resources if suicidal intent is detected. For teen users, these pathways are intended to be even more prominent. The company highlights its commitment to collaborating with mental health and child safety organisations, establishing advisory councils of external experts, and supporting independent research into how AI affects teen well-being.
The blueprint also references OpenAIโs existing work on preventing AI-generated child sexual abuse material (CSAM) and child sexual exploitation material (CSEM), noting that detection methods are constantly improved and confirmed cases are reported to relevant authorities. These protections, the document suggests, are part of a broader responsibility that AI companies must uphold when serving younger audiences.
Importantly, OpenAI positions the Teen Safety Blueprint not as a finished policy but as an evolving framework. The company commits to sustained engagement across India with teens, parents, educators, child-safety experts, mental health practitioners, researchers, and policymakers to continuously test and refine these protections. As AI adoption accelerates, the blueprint argues, AI literacy must be taught alongside traditional subjects so that teens understand not just how to use AI, but where it can fail and how to use it safely.
The document also aligns itself with India's broader AI governance discussions, noting that coordinated institutional efforts across regulators and standard-setting bodies are essential to make such frameworks effective. OpenAI signals a willingness to work with policymakers and advocacy groups to shape public policies that promote teen safety in AI environments.
Throughout, there is a recognition that Indian teens' experiences are diverse, shaped by region, language, and socio-economic context. OpenAI states that it intends to learn directly from teens themselves, listening to their concerns and aspirations as AI becomes part of everyday life at home and at school. This feedback loop, along with ongoing research and expert consultation, is intended to guide future iterations of the safeguards.