Meta promotes its Ray-Ban smart glasses on its website as a product developed “with your privacy in mind,” saying users retain control over what is shared and when. The terms for the glasses’ AI services, however, grant the company broad rights. Voice recordings are stored for product improvement only with active user consent, but the AI assistant automatically processes speech, text, images, and in some cases video in order to function, and that data may be shared further. This processing cannot be disabled.
According to an investigation by Svenska Dagbladet and Göteborgs-Posten, video data from the AI glasses is being reviewed by data annotators working for Sama, a Meta contractor based in Nairobi, Kenya. These workers help train Meta’s AI systems by identifying, labeling, and categorizing objects in images and videos.
Nude bodies, bank cards, sex scenes
What appears on their screens reportedly goes far beyond Instagram reels and family videos. Several workers told the newspapers that they had seen clips showing people leaving bathrooms naked, getting dressed, or having sex while the glasses were recording. “We see everything, from living rooms to naked bodies. Meta has this kind of content in its databases,” one worker said.
Other recordings reportedly show accidentally filmed bank cards or users watching pornography while wearing the glasses. Transcriptions are also part of the job: annotators check whether the AI assistant responded correctly. In doing so, they say they encounter chats about crime, protests, and sexual content. “It’s not just greetings, it can be very dark things,” one employee said in the investigation.
The workers have signed extensive non-disclosure agreements. Cameras are reportedly installed throughout the offices, while personal phones and recording devices are banned. According to employees, asking too many questions can put their jobs at risk and, for many, mean falling back into poverty.
AI training needs human eyes - and that is becoming a problem
For Meta’s glasses to recognize objects, understand speech, and interpret scenes, humans still need to prepare the raw training data.
Former Meta employees in the United States told the Swedish journalists that sensitive data is not supposed to be used for AI training. Faces in annotation datasets, they said, are meant to be blurred automatically. However, data annotators in Kenya reported that anonymization does not always work. Faces that should have been obscured sometimes remain visible. “The algorithms fail sometimes. Especially in difficult lighting conditions, certain faces and bodies become visible,” a former Meta employee said.
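The failure mode the annotators describe is easy to see in how automated anonymization typically works: a face detector proposes bounding boxes, and only those boxes get blurred, so any face the detector misses stays sharp. The sketch below is purely illustrative and is not Meta's pipeline; it uses OpenCV's bundled Haar cascade, a deliberately simple detector, to show the structure.

```python
# Illustrative sketch of automated face blurring, NOT Meta's actual pipeline.
# A detector finds face bounding boxes; each box is Gaussian-blurred.
# Faces the detector misses (poor lighting, partial views, odd angles)
# remain fully visible, which is the failure the annotators report.
import cv2

def blur_faces(image_path: str, output_path: str) -> int:
    """Blur detected faces in an image; return how many were found."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Haar cascade shipped with OpenCV: a simple, imperfect detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        roi = image[y:y + h, x:x + w]
        # Kernel size must be odd; scale it with the face so the blur
        # is strong enough to anonymize.
        k = max(31, (w // 3) | 1)
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (k, k), 0)

    cv2.imwrite(output_path, image)
    # Zero detections means the image passes through entirely unblurred.
    return len(faces)
```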
Kleanthi Sardeli, a privacy lawyer at the Vienna-based organization None Of Your Business (NOYB), which has already filed several complaints against Meta, said the case points to a clear transparency problem: users may not realize that the camera is recording when they activate the AI assistant. In her view, the kind of video material Sama handles strongly suggests exactly that.
“If this is happening in Europe, then both transparency and a legal basis for the processing are missing,” she said. For AI training, explicit consent should be required. “Once the material has been fed into the models, the user practically loses control over how it is used.”
Kenya does not currently have an EU adequacy decision. A formal dialogue between the EU and Kenya only began in May 2024. Meta states in its privacy policy that user data may be transferred, stored, and processed globally because the company “operates worldwide.” Petra Wierup, a lawyer at Sweden’s data protection authority IMY, said that if Meta is the controller under the GDPR, then the same level of protection must apply even when subcontractors in third countries are involved.
Sama: a familiar name with a controversial track record
Sama is no stranger to controversy. In 2021, the company labeled tens of thousands of text passages containing sexual abuse, violence, and hate speech for OpenAI. According to a TIME investigation, Kenyan workers at the time were paid around $1.32 to $2 per hour. One worker described the experience as “torture.” Sama also helped label data for autonomous vehicles and was previously involved in Facebook content moderation.
After further reports exposed trauma and alleged union-busting in Sama’s Nairobi office, the company ended that content moderation work for Meta in 2023 and shifted its focus to computer vision data annotation — the very kind of work now relevant to Meta’s AI glasses.
That annotation work is increasingly assisted by AI itself. For training its computer vision model SAM 3, Meta developed a “Data Engine” in which AI models first generate segmentation suggestions that are then reviewed and corrected by both human and AI annotators. The company says this process significantly speeds up annotation.
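Meta has not published the Data Engine's internals, so the names and threshold below are assumptions for illustration only. The sketch shows the loop the company describes: a model proposes segmentation labels, confident proposals are accepted automatically, and only the uncertain remainder consumes reviewer time.

```python
# Illustrative model-in-the-loop annotation sketch (assumed structure,
# not Meta's actual Data Engine): a segmentation model proposes masks,
# high-confidence proposals are auto-accepted, and the rest are queued
# for human review and correction.
from dataclasses import dataclass

@dataclass
class Proposal:
    image_id: str
    label: str
    mask: object        # e.g. a binary mask array in a real system
    confidence: float   # the model's own score for this proposal

AUTO_ACCEPT = 0.95      # assumed threshold, chosen for illustration

def route(proposals: list[Proposal]) -> tuple[list[Proposal], list[Proposal]]:
    """Split proposals into auto-accepted labels and a human review queue."""
    accepted = [p for p in proposals if p.confidence >= AUTO_ACCEPT]
    review_queue = [p for p in proposals if p.confidence < AUTO_ACCEPT]
    return accepted, review_queue

def annotate(images, model, human_review):
    """One pass of the engine: the model proposes, humans correct hard cases."""
    dataset = []
    for image in images:
        accepted, queue = route(model.propose(image))
        dataset.extend(accepted)
        # Only uncertain proposals cost human time; this is where the
        # claimed annotation speed-up would come from.
        dataset.extend(human_review(queue))
    return dataset
```

The design point is that human effort scales with the model's uncertainty rather than with dataset size, which is why such pipelines still require human eyes on whatever raw footage the model cannot resolve.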