Google Researchers Publish Paper About How AI Is Ruining the Internet
Google’s trying to find the guy responsible for all this.
Isn’t It Ironic?
Google researchers have published a new paper warning that generative AI is ruining vast swaths of the internet with fake content. That's painfully ironic, given that Google has been hard at work pushing the same technology to its own enormous user base.
The study, a yet-to-be-peer-reviewed paper spotted by 404 Media, found that the great majority of generative AI users are harnessing the tech to “blur the lines between authenticity and deception” by posting fake or doctored AI content, such as images or videos, on the internet. The researchers also pored over previously published research on generative AI and around 200 news articles reporting on generative AI misuse.
“Manipulation of human likeness and falsification of evidence underlie the most prevalent tactics in real-world cases of misuse,” the researchers conclude. “Most of these were deployed with a discernible intent to influence public opinion, enable scam or fraudulent activities, or to generate profit.”
Compounding the problem, generative AI systems are increasingly advanced and readily available, "requiring minimal technical expertise," according to the researchers. That combination, they warn, is twisting people's "collective understanding of socio-political reality or scientific consensus."
Missing from the paper, as far as we can tell? Any reference to Google's own embarrassing blunders with the tech, which, coming from one of the biggest companies on Earth, have sometimes been enormous in scale.
Forecast: Cloudy
If you read the paper, you can’t help but conclude that the “misuse” of generative AI often sounds a lot like the tech is working as intended. People are using generative AI to make lots of fake content because it’s really good at doing that task, and consequently flooding the internet with AI slop.
And Google itself has enabled this situation, allowing that fake content to proliferate and at times even serving as the source of it, whether it's fake images or false information.
This mess is also testing people's capacity to discern the fake from the real, according to the researchers.
“Likewise, the mass production of low quality, spam-like and nefarious synthetic content risks increasing people’s scepticism towards digital information altogether and overloading users with verification tasks,” they write.
And chillingly, because we’re being inundated with fake AI content, the researchers say there have been instances when “high profile individuals are able to explain away unfavourable evidence as AI-generated, shifting the burden of proof in costly and inefficient ways.”
As companies like Google continue to cram AI into every product, expect more of all this.
More on Google: Google Caught Manually Taking Down Bizarre AI Answers