Is this how Google fixes the big problem caused by its own AI photos?
I've never liked Google's AI-powered photo-editing features, but I understand why they matter. Features like Magic Editor in Google Photos let you create fake photos of things that never happened, but they demonstrate that Google is good at AI. With AI as hot as it is right now in the tech world, that's obviously something the company needs to prove.
Google didn’t stop there. The Pixel 9 series comes with new genAI features for images that are even more troubling. With Pixel Studio, you can generate all sorts of pictures with a single prompt, including offensive content.
More troubling still is the Reimagine feature, which lets you edit real photos with AI. The results are incredible, and they could easily fool unsuspecting people. This opens the door to abuse, as some people might use AI-edited photos to mislead the public.
Many people, including yours truly, pointed out these issues a few months ago and called for fixes. I actually welcomed the fact that Apple Intelligence won’t bring similar features to the iPhone 16, though Apple might one day want to prove it can let you edit photos with AI just like Google.
Meanwhile, Google seems to have been working on ways to defend the world against the AI creations coming from the Pixel 9 and other devices. Specifically, Google is working on giving the Google Photos app the ability to identify AI-generated content.
Some AI-generated images might carry watermarks, depending on the platform. Others include metadata indicating that the photo was generated or edited with AI. But not all internet users are savvy enough to look for either, and images can easily be manipulated to strip those safeguards.
That’s where capabilities like the ones found in an unreleased version of the Google Photos app for Android might come in handy. Discovered by Android Authority, Google Photos v7.3 contains code entries that indicate Google might be looking to show how images were created.
It’s unclear how it will work, but Google Photos might tell the user whether an image has been produced with generative AI. It might even identify the source.
If you use Google Gemini to create images or edit photos with Magic Editor, the EXIF data will include tags telling an advanced user those images were created with Google AI. But what if Google is building a more advanced AI image identification engine in Google Photos? That would be even better for the entire industry. After all, people share all sorts of images. Having Google Photos produce a warning that an image is fake would be fantastic.
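To illustrate what a metadata check like this could look like, here is a minimal sketch in Python. It assumes the image's XMP metadata uses the IPTC Digital Source Type convention, which standards bodies have defined for labeling AI-generated and AI-edited media; the sample XMP snippet and marker strings are illustrative, not a guarantee of what Google Photos actually writes.

```python
# Sketch: spotting a generative-AI provenance marker in image metadata.
# Assumes the IPTC DigitalSourceType convention; the XMP sample below
# is hypothetical, not taken from an actual Pixel 9 photo.

AI_SOURCE_MARKERS = (
    # Fully AI-generated media (IPTC DigitalSourceType term)
    "trainedAlgorithmicMedia",
    # Real photo edited/composited with generative AI
    "compositeWithTrainedAlgorithmicMedia",
)

def looks_ai_generated(xmp_text: str) -> bool:
    """Return True if the XMP metadata text contains a known AI marker."""
    return any(marker in xmp_text for marker in AI_SOURCE_MARKERS)

# Hypothetical XMP fragment, as an AI editing tool might embed it:
sample_xmp = (
    "<Iptc4xmpExt:DigitalSourceType>"
    "http://cv.iptc.org/newscodes/digitalsourcetype/"
    "compositeWithTrainedAlgorithmicMedia"
    "</Iptc4xmpExt:DigitalSourceType>"
)

print(looks_ai_generated(sample_xmp))  # True
```

A real detector would have to go further than this, of course, since metadata can be stripped; that's exactly why an on-device analysis engine in Google Photos would be the more robust approach.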
That said, it’s unclear whether Google Photos would serve that information directly or whether you’d have to hunt for it. I’d prefer the former scenario, where Google Photos proactively tells the user that an image is either generated or edited using AI.
We'll find out how the feature works soon enough, assuming Google makes it available in a future Google Photos update. If Google goes this route, it would be great to see others in the industry adopt similar features in their photo apps.