Journalistic NGO calls out Apple for false Apple Intelligence headlines

Not even Apple is safe from artificial intelligence hallucinations. We've seen it happen quite frequently with Google's Gemini (like the platform telling users to put glue on pizza), Microsoft's Copilot, and OpenAI's ChatGPT.

Apple even has a prompt designed to keep its Apple Intelligence platform from hallucinating, but that doesn't mean it won't get things wrong. Now, BBC News reports that the journalistic NGO Reporters Without Borders has called on Apple to remove its notification summaries feature.

This feature was introduced with iOS 18.1 and improved with iOS 18.2. It condenses all your notifications into a single stack, so you can quickly catch up with your iMessage groups, X notifications, and more.

However, users have already noticed that it gets things wrong every once in a while. Journalist Joanna Stern was once surprised to discover her wife had a "husband" (Apple Intelligence assumed she was talking about a man, not a woman), and other people have shared screenshots of breakup texts being summarized with little sensitivity.

Unfortunately, Apple Intelligence hallucinations seem to have become more frequent, as in the recent Luigi Mangione case. Mangione is the man accused of murdering a healthcare insurance CEO, and the platform told users (through a summary of BBC News push notifications) that he had shot himself, which didn't happen.

The publication writes:

The BBC made a complaint to the US tech giant after Apple Intelligence, which uses artificial intelligence (AI) to summarise and group together notifications, falsely created a headline about murder suspect Luigi Mangione. The AI-powered summary falsely made it appear that BBC News had published an article claiming Mangione, the man accused of the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.

In addition, Reporters Without Borders released a statement saying this incident proves that "generative AI services are still too immature to produce reliable information for the public." It goes on: "RSF calls on Apple to act responsibly by removing this feature. The automated production of false information attributed to a media outlet is a blow to the outlet's credibility and a danger to the public's right to reliable information on current affairs."

Apple hasn't commented on the matter. Still, future Apple Intelligence hallucinations could harm people, spread fake news, or even move markets.

Apple will likely keep improving Apple Intelligence. However, it would be safer if the company prevented its AI platform from summarizing news notifications for the time being.

BGR will let you know if we hear from Apple.
