Social Media

Is Facebook Spying on You Again With New Facial ID Tools? Here’s What’s Going On

Meta is facing renewed scrutiny after reports revealed plans to add facial recognition capabilities to its AI-powered smart glasses, a move that could significantly expand the company's biometric data collection at a moment when political and regulatory attention in the United States is focused elsewhere.

According to reporting by The New York Times, Meta internally discussed timing the rollout to coincide with a volatile political environment in order to reduce the likelihood of sustained public backlash.

The proposed update would allow Meta’s AI glasses to identify faces in real time to enhance “connection” and contextual awareness for wearers. While Meta has positioned the feature as optional and consent-based, facial recognition remains one of the most sensitive areas in consumer technology. The company previously shut down facial recognition systems on Facebook in 2021 following widespread criticism over automatic face detection and photo tagging.

An internal Meta communication, as reported by The New York Times, reads:

“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

In recent years, however, Meta has quietly reintroduced facial recognition for account security across Instagram and other services, framing it as a fraud-prevention tool. Privacy advocates argue that extending this capability to wearable devices fundamentally changes the risk profile, as it introduces continuous, real-world biometric scanning that can affect non-users who have not consented.

Concerns are amplified by Meta's long and well-documented history of controversial data practices. Academic research from the University of Cambridge has highlighted how Facebook data can be used to infer private traits at scale. Subsequent disclosures revealed emotional manipulation experiments, aggressive user tracking programs, and internal research showing harmful social effects.

In 2021, whistleblower Frances Haugen testified that Meta repeatedly deprioritized user safety when it conflicted with growth and revenue goals. More recently, Meta has been investigated in the US and Europe over AI training data practices, including allegations in 2025 that the company used pirated books to train its models. Regulators in the EU continue to examine Meta under the Digital Services Act and GDPR, where biometric data is classified as highly sensitive.

Beyond Meta, facial recognition technology itself is under global scrutiny. Governments have deployed similar systems for law enforcement, border control, and public surveillance, with documented misuse in authoritarian contexts. Experts warn that normalizing such technology through consumer devices could accelerate adoption before adequate safeguards are in place.

Meta has not confirmed a public launch date for the facial recognition feature. However, the reported strategy of timing controversial updates to periods of political distraction has raised alarms among civil society groups and privacy researchers.

Abdul Wasay

Abdul Wasay explores emerging trends across AI, cybersecurity, startups, and social media platforms in a way anyone can easily follow.