Google to Launch AI Chatbot Gemini for Children Under 13

Google is preparing to launch its Gemini chatbot for children under the age of 13. The rollout is set to begin next week, targeting users with parent-managed Google accounts through Family Link.

A New Generation of Users

In an email sent to parents, the tech giant shared:

“Gemini Apps will soon be available for your child.”

The message further explained:

“That means your child will be able to use Gemini” to ask questions, get help with homework, and create imaginative stories.

Only children whose accounts are supervised via Google Family Link — a platform that enables parents to manage access to Gmail, YouTube, and other services — will be able to interact with Gemini. Parents are required to provide personal details such as their child’s name and birth date to create these accounts.

Safety Measures and Data Handling

Google spokesperson Karl Ryan confirmed that Gemini has built-in safety features tailored for younger users, aiming to restrict the generation of inappropriate or harmful content. Ryan also emphasized that the company will not use data from child accounts to train the AI.

Despite these safeguards, the company has acknowledged potential concerns. In its communication to families, Google cautioned:

“Gemini can make mistakes,” and advised parents to “help your child think critically” about the chatbot’s responses.

The message encouraged parents to reinforce that “Gemini isn’t human” and to “not enter sensitive or personal info in Gemini.”

Balancing Innovation and Risk

As AI chatbots like Gemini become increasingly integrated into educational tools and everyday life, their reach among children is growing rapidly. Google joins a race among tech companies to engage young audiences, with AI-powered platforms now serving as study assistants, story companions, and more.

However, children’s rights organizations and experts remain cautious. Groups like UNICEF have flagged the potential dangers of generative AI, which may confuse, misinform, or manipulate children who are unable to distinguish between machine and human interaction.

UNICEF’s global research office stated:

“Generative AI has produced dangerous content,” emphasizing the urgent need for strong ethical guardrails when deploying such tools for children.

Parental Controls and Legal Compliance

Under the new rollout, children will be able to access Gemini on their own, though Google says it will notify parents when their child first uses the chatbot. Parents can then adjust settings or disable access altogether.

The email reassures families:

“We’ll also let you know when your child accesses Gemini for the first time.”

Ryan noted that the initiative complies with the Children’s Online Privacy Protection Act (COPPA), a U.S. federal law that mandates parental consent before collecting data from children under 13.

This move is part of a wider trend among tech giants to create child-focused digital products. Google previously introduced YouTube Kids in 2015, which quickly became a favorite among families. Meanwhile, competitors like Meta faced backlash over proposed platforms for younger audiences, such as the now-paused Instagram Kids project.

Notably, companies including Google, Amazon, and Microsoft have paid hefty fines for violating child data privacy laws in the past, reinforcing the need for strict compliance as they expand into the youth market.

Final Thoughts

While Gemini’s introduction to a younger demographic marks a significant evolution in AI accessibility, it also raises critical questions about the ethical deployment of such tools. The balance between innovation and child safety remains delicate, with parents urged to stay actively involved in guiding their children’s digital experiences.