
Google ‘taking swift action’ to remove bizarre AI search results — like telling users to eat rocks


Google is scrambling to remove a wave of incorrect or even dangerous answers from its controversial AI-powered search results — including one answer that advised users to eat rocks for nutrition.

The embattled Big Tech giant has faced a major backlash for spreading misinformation since the tool, dubbed AI Overviews, was rolled out in the US this month.

Google has said the software will reach more than one billion users by the end of the year.

In one widely circulated example, AI Overviews allegedly responded to the query, “how many rocks should a child eat?” by falsely claiming UC Berkeley geologists recommend “eating at least one small rock per day.”

Google’s AI Overviews advised eating at least one rock per day. X

The answer appeared to originate from a satirical post by The Onion.

Google said it is “taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

“The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web,” a Google spokesperson said in a statement. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”

Google said many instances of strange responses were examples of so-called data voids, where there was a lack of high-quality information available online for the particular query.

The company said it makes thousands of improvements to search each year.

As The Post reported, the chatbot, which critics say could devastate news publishers, has generated bizarre responses such as telling users to add glue to pizza sauce and claiming that tobacco has health benefits for kids.

Google falsely claimed that former president Andrew Johnson earned 14 degrees from a university he never attended. X

“Mixing cheese into the sauce helps add moisture to the cheese and dry out the sauce,” Google’s AI Overview said. “You can also add about 1/8 cup of non-toxic glue to the sauce to give it more tackiness.”

The glue recommendation appeared to have been pulled from an 11-year-old Reddit post on how to keep cheese from sliding off pizza.

AI experts have warned of major risks associated with the rapidly growing technology – from spreading misinformation through false claims known as “hallucinations” to potentially threatening humanity’s existence altogether.

AI Overviews allegedly claimed Google violates antitrust law. X / @bcmerchant

“Google Search is Google’s flagship product and cash cow,” said Gergely Orosz, author of the Pragmatic Engineer newsletter. “It’s the one property Google needs to keep relevant/trustworthy/useful. And yet, examples on how AI overviews are turning Google search into garbage are all over my timeline.”

In one case, AI Overviews claimed that former US President Andrew Johnson, who died in 1875, had obtained 14 degrees from the University of Wisconsin-Madison, including one as recently as 2012.

Johnson never attended the school.

Google’s AI-powered results also responded to the query “How many muslim presidents has the US had?” by falsely claiming it has “had one Muslim president, Barack Hussein Obama.”

Google said AI Overviews will reach more than one billion users globally by the end of the year. AP

Google CEO Sundar Pichai is pictured. AP

Tech journalist Brian Merchant posted a screenshot in which Google’s own AI search results claimed that the company has violated antitrust law, citing the Justice Department’s pending lawsuits.

A previous iteration of Google’s AI chatbot, Bard, made the same claim last November.

The Post could not immediately verify the authenticity of every screenshot.

“The fact that it’s so hard to tell which AI Overviews are real or not (yes, I understand Google can check but the general public cannot) — is a massive misinformation risk in and of itself,” said Lily Ray, an online search expert who has tracked several examples of actual or doctored answers from AI Overviews on her X account.
