When Meta sold Ray-Ban smart glasses with the promise that they were built with privacy in mind, that design centered on a small LED light on the frame. Behind the scenes, a data pipeline routes footage from homes and everyday environments to a contractor in Nairobi, Kenya, where workers review and label the content to train Meta’s artificial intelligence systems.
That is the finding of a joint investigation by the Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, based on testimony from workers at Sama, a data annotation company in Nairobi subcontracted by Meta to label footage captured through the AI glasses.
“We see everything, from living rooms to naked bodies. Meta has that type of content in its databases,” one worker told the Swedish journalists. “In some videos you can see someone going to the toilet, or getting undressed. I do not think they know, because if they knew, they would not be recording,” another contractor said.
One worker described reviewing footage in which a wearer placed the glasses on a bedside table. The wearer’s spouse later entered the room and undressed, apparently unaware the device had captured the moment. Other recordings reportedly included bank cards filmed by accident, people watching explicit content, and footage of sexual activity.
Workers say they are expected to review the material without questioning it.
“You understand it is someone’s private life you are looking at, but at the same time you are expected to carry out the work. If you start asking questions, you are gone,” one contractor said.
More than seven million pairs of Ray-Ban Meta smart glasses were sold in 2025. The devices capture first-person footage when the AI assistant is activated. Human annotators train Meta’s AI systems by labeling and categorizing objects, scenes, and interactions within images and videos. The material sent for review often includes anything the camera captured, whether the wearer intended to record it or not.
“You think that if they knew about the extent of the data collection, no one would dare to use the glasses,” one annotator said.
Meta’s terms of service reserve the right to conduct manual human review of AI interactions. That clause forms the legal basis for sending user recordings to contractors for training and quality checks. Privacy advocates say many users do not realize the camera is recording when they activate the AI assistant, meaning sensitive footage can be captured unintentionally.
Data protection lawyer Kleanthi Sardeli warned that once the footage enters training systems, user control becomes limited.
“Once the material has been fed into the models, the user in practice loses control over how it is used,” she said.
Meta says automated face blurring helps protect identities within training data. Workers involved in the process say the system does not always work as intended. According to them, faces and bodies sometimes remain visible, particularly in poor lighting conditions.
That means individuals recorded without their knowledge could be identifiable to people reviewing the footage in other parts of the world.
Following public criticism, Sama ended some of its content moderation work and shifted toward computer vision annotation. That work now includes reviewing footage generated by Meta’s smart glasses. Workers at the facility operate under strict non-disclosure agreements. Offices use surveillance cameras, and personal recording devices are prohibited. According to employees, these restrictions leave them with few options if they want to report concerns.
Internal Meta planning documents reportedly show interest in adding facial recognition capabilities to future versions of the glasses. Critics say such features could raise new privacy risks, particularly if current safeguards fail to reliably obscure identities in training data.
For millions of users who purchased the glasses believing the small recording indicator protected their privacy, the investigation raises deeper questions about how AI wearables collect and process data behind the scenes.
