
Meta confirms it may train its AI on any image you ask Ray-Ban Meta AI to analyze

We recently asked Meta if it trains AI on photos and videos that users take on the Ray-Ban Meta smart glasses. The company originally didn’t have much to say.

Since then, Meta has offered TechCrunch a bit more color.

In short, any image you share with Meta AI can be used to train its AI.

“[I]n locations where multimodal AI is available (currently US and Canada), images and videos shared with Meta AI may be used to improve it per our Privacy Policy,” said Meta policy communications manager Emil Vazquez in an email to TechCrunch.

In a previous emailed statement, a spokesperson clarified that photos and videos captured on Ray-Ban Meta are not used by Meta for training as long as the user doesn’t submit them to AI. However, once you ask Meta AI to analyze them, those photos fall under a completely different set of policies.

In other words, the company is using its first consumer AI device to create a massive stockpile of data that could be used to create ever-more powerful generations of AI models. The only way to “opt out” is to simply not use Meta’s multimodal AI features in the first place.

The implications are concerning because Ray-Ban Meta users may not understand they’re giving Meta tons of images – perhaps showing the inside of their homes, loved ones, or personal files – to train its new AI models. Meta’s spokespeople tell me this is clear in the Ray-Ban Meta’s user interface, but the company’s executives either initially didn’t know or didn’t want to share these details with TechCrunch. We already knew Meta trains its Llama AI models on everything Americans post publicly on Instagram and Facebook. But now, Meta has expanded this definition of “publicly available data” to anything people look at through its smart glasses and ask its AI chatbot to analyze.

This is particularly relevant now. On Wednesday, Meta started rolling out new AI features that make it easier for Ray-Ban Meta users to invoke Meta AI in a more natural way, meaning users will be more likely to send it new data that can also be used for training. In addition, the company announced a new live video analysis feature for Ray-Ban Meta during its 2024 Connect conference last week, which essentially sends a continuous stream of images into Meta’s multimodal AI models. In a promotional video, Meta said you could use the feature to look around your closet, analyze the whole thing with AI, and pick out an outfit.

What the company doesn’t promote is that you are also sending these images to Meta for model training.

Meta spokespeople pointed TechCrunch towards its privacy policy, which plainly states: “your interactions with AI features can be used to train AI models.” This seems to include images shared with Meta AI through the Ray-Ban smart glasses, but Meta still wouldn’t clarify.

Spokespeople also pointed TechCrunch towards Meta AI’s terms of service, which state that by sharing images with Meta AI, “you agree that Meta will analyze those images, including facial features, using AI.”

Meta just paid the state of Texas $1.4 billion to settle a court case related to the company’s use of facial recognition software. That case was over a Facebook feature rolled out in 2011 called “Tag Suggestions.” By 2021, Facebook made the feature explicitly opt-in, and deleted billions of people’s biometric information it had collected. Notably, several of Meta AI’s image features are not being released in Texas.

Elsewhere in Meta’s privacy policies, the company states that it also stores all the transcriptions of your voice conversations with Ray-Ban Meta, by default, to train future AI models. As for the actual voice recordings, there is a way to opt out: when users first log in to the Ray-Ban Meta app, they can choose whether their voice recordings can be used to train Meta’s AI models.

It’s clear that Meta, Snap, and other tech companies are pushing for smart glasses as a new computing form factor. All of these devices feature cameras that people wear on their face, and they’re mostly powered by AI. This rehashes a ton of privacy concerns we first heard about in the Google Glass era. 404 Media reported that some college students have already hacked the Ray-Ban Meta glasses to reveal the name, address, and phone number of anyone they look at.
