Meta Platforms is scrambling to contain a weekend firestorm of misinformation after a viral video falsely claimed its upcoming privacy policy update would let the company scan users’ private messages across Facebook, Instagram, and WhatsApp to train its AI models.
The rumor, which racked up millions of views on TikTok and X, sparked outrage over a supposed privacy invasion. Meta insists the claim is a complete fabrication and that the policy change applies only to interactions with its own AI assistant.
The controversy erupted over Meta’s December 16, 2025, privacy policy refresh, announced in October. The update clarifies how the company will use “interactions with Meta AI,” like chats with its Llama-powered assistant, to personalize content and ads, such as recommending posts or reels based on your queries.
Meta cleared the air:
“The update mentioned in the viral rumor isn’t about DMs at all, it’s about how we’ll use people’s interactions with our AI features to further personalize their experience. We do not use the content of your private messages with friends and family to train our AIs unless you or someone in the chat chooses to share those messages with our AIs. This also isn’t new, nor is it part of this Dec. 16 privacy policy update.”
The misleading video, which circulated widely starting November 29, cherry-picked snippets from Meta’s blog post (“We will soon use your interactions with Meta AI to personalize the content and ads you see”) and twisted them to suggest blanket surveillance of all private chats. It ignored the explicit carve-out for non-AI conversations and Meta’s longstanding commitment to end-to-end encryption on Messenger and WhatsApp.
“This is scaremongering at its worst,” said Andrew Hutchinson, editor at Social Media Today, who first debunked the claim. “Users are right to be wary of Meta’s data practices, but amplifying baseless rumors erodes trust even more.”
Meta’s history of privacy scandals (from the 2018 Cambridge Analytica affair to 2023 EU fines under GDPR for mishandling user data) has primed users for such paranoia. The company has faced repeated scrutiny over AI training: In May 2025, it settled a $1.4 billion EU lawsuit over scraping public posts to train Llama models without consent, leading to opt-out tools for European users.
WhatsApp’s 2024 policy update, which clarified data sharing for ads but not AI, fueled a similar backlash. Still, Meta maintains that private DMs remain off-limits for training unless users explicitly share them with Meta AI, an opt-in feature invoked, for example, by asking the bot for recipe ideas mid-chat.
As Meta clarifies, users’ one-on-one messages with friends stay encrypted and untouched, but if you summon Meta AI in a group thread for a quick fact-check, that conversation could inform future recommendations. Meta advises avoiding AI prompts in sensitive chats and reviewing privacy settings. The company also reiterated its “no scanning” stance in a November 30 X thread, linking to media fact-checks and urging users not to spread unverified claims.
With the update going live in two weeks, expect more scrutiny and perhaps a few more viral videos. For now, users can breathe easy: Your DMs are safe from the AI overlords, unless you invite them in.