Meta Faces Privacy Scrutiny Over Workers Reviewing AI Glasses Footage
European regulators seek details after investigation says data annotators reviewed recordings from Ray-Ban Meta smart glasses.
[Image source: Diksha Mishra/MITSMR India]
Privacy regulators in the UK and several European jurisdictions are seeking answers from Meta after an investigation found contractors reviewing footage from its Ray-Ban AI glasses were exposed to highly intimate recordings, intensifying scrutiny of wearable devices that capture personal data for AI training.
The inquiry follows a joint report by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, which examined how contractors review footage captured by Meta’s AI glasses.
The workers were data annotators employed by outsourcing firm Sama in Nairobi and were tasked with reviewing images and videos to help train Meta’s artificial intelligence systems.
The UK’s Information Commissioner’s Office said it would contact Meta to request details on how the company is complying with data protection rules.
“The claims in this article are concerning. We will be writing to Meta to request information on how it is meeting its obligations under UK data protection law,” the regulator said in a statement.
The glasses, developed in partnership with Ray-Ban, allow users to capture images and video through a built-in camera and use Meta’s AI assistant to answer questions about what the wearer is seeing.
According to Meta, some of the content shared with the AI system may be reviewed by human contractors to improve the service.
“When people share content with Meta AI, like other companies we sometimes use contractors to review this data to improve people’s experience with the glasses, as stated in our Privacy Policy,” Meta told the BBC.
The company added that the content is filtered before review in an effort to protect privacy.
“This data is first filtered to protect people’s privacy,” Meta said, noting that measures such as blurring faces may be applied before material is analyzed.
However, workers interviewed during the Swedish investigation said those protections did not always function as intended. One contractor told the newspapers that the material being reviewed often included private scenes from users’ daily lives.
“We see everything from living rooms to naked bodies,” one worker said.
Workers described footage of individuals using the toilet, undressing or engaging in sexual activity.
In one case, a pair of glasses left recording in a bedroom captured a woman changing clothes after the wearer had left the room.
Other clips reportedly included images of bank cards, people watching pornography and other intimate moments.
Some annotators said they felt obligated to continue reviewing such material despite discomfort.
“You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work,” one employee told the Swedish newspapers. “You are not supposed to question it. If you start asking questions, you are gone.”
Workers said their workplace imposed strict security measures, including constant camera monitoring and bans on mobile phones, but the material they handled frequently involved highly sensitive personal information.
Meta said users must actively trigger recording through a voice command or manual action. The glasses also include a small light on the frame that turns on while recording.
The company advises users to avoid recording in private settings and to make others aware when the recording indicator is active.
Still, critics argue that people appearing in recordings may not realize they are being filmed or that the footage could later be reviewed by third-party contractors.
The Swedish investigation also highlighted that Meta’s terms of service allow human review of interactions with its AI systems.
The company’s policies state that in some cases it may review conversations or media shared with its AI either through automated systems or manual human review.
Privacy advocates say many users may not be aware of this process.
The issue also highlights the broader reliance of technology companies on data annotation work carried out in countries such as Kenya, India and Colombia.
Workers are often tasked with labeling images, videos and text so that AI systems can interpret information correctly.
The practice has drawn scrutiny in the past, particularly when annotators are required to review disturbing or highly personal material.
Sama, the outsourcing firm involved in the report, began as a nonprofit focused on creating technology jobs in developing countries and is certified as a B Corporation. The company previously faced criticism over its content moderation work and later stopped offering those services.
The controversy comes as Meta’s AI glasses gain traction among consumers. The company sold more than seven million pairs of Ray-Ban Meta glasses in 2025, a sharp increase from the roughly two million units sold across 2023 and 2024 combined.
While the devices let users capture first-person footage and interact with AI tools through voice commands, the technology has also raised concerns about privacy and misuse.
Women have previously told the BBC they were filmed without their consent by users wearing smart glasses.
The UK data regulator said companies developing such technology must be transparent with users about what information is collected and how it is handled.
“Devices processing personal data, including smart glasses, should put users in control and provide for appropriate transparency,” the Information Commissioner’s Office said. “Service providers must clearly explain what data is collected and how it is used.”