Apple’s AI Disastrously Rewrote a BBC Headline to Say Luigi Mangione Shot Himself

Apple has only just begun rolling out its much-hyped suite of AI features for its devices, and we are already seeing major problems. Case in point: the BBC has complained to Apple after an AI-powered notification summary rewrote a BBC headline to say that Luigi Mangione, the man charged with murdering UnitedHealthcare CEO Brian Thompson, had shot himself. Mangione did not shoot himself and remains in police custody.

Apple Intelligence includes an iOS feature meant to relieve notification fatigue by bundling and summarizing the alerts coming in from individual apps. For instance, if a user receives multiple text messages from one person, instead of displaying them all in a long list, iOS will now try to condense the push alerts into one concise notification.

It turns out, and this should not surprise anyone familiar with generative AI, that the “intelligence” in Apple Intelligence oversells what the feature delivers: the summaries are sometimes unfortunate or just plain wrong. Notification summaries first arrived in iOS 18.1, released back in October; earlier this week, Apple added native ChatGPT integration to Siri.

Image: a notification from the BBC app that includes an incorrect summary.

In an article about the error, the BBC shared a screenshot of a notification summarizing three different stories that had been sent as push alerts. The summary read: “Luigi Mangione shoots himself; Syrian mother hopes Assad pays the price; South Korea police raid Yoon Suk Yeol’s office.” The other two summaries were accurate, the BBC says.

The BBC has complained to Apple about the situation, which is embarrassing for the company but also risks damaging the reputation of news outlets if readers believe they are sending out misinformation. Publishers have no control over how iOS decides to summarize their push alerts.

“BBC News is the most trusted news media in the world,” a BBC spokesperson said in the story. “It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications.” Apple declined to respond to the BBC’s questions about the snafu.

Artificial intelligence has a lot of potential in many areas, but language models remain one of its shakiest applications. Still, there is plenty of corporate hope that the technology will become good enough for enterprises to rely on it for uses like customer support chat or searching through large collections of internal data. It’s not there yet; in fact, enterprises already using AI say they still have to do lots of editing of the work it produces.

It feels uncharacteristic of Apple to so deeply integrate such unreliable and unpredictable technology into its products. Apple has no control over ChatGPT’s outputs; even OpenAI, the chatbot’s creator, can barely control its language models, whose behavior is constantly being tweaked. Summarizing short notifications should be the easiest thing for AI to do well, and Apple is flubbing even that.

At the very least, some of Apple Intelligence’s features demonstrate how AI could have practical uses. Better photo editing and a focus mode that understands which notifications should come through are nice. But for a company known for polished experiences, botched notification summaries and a hallucinating ChatGPT could make iOS feel unpolished. It feels like Apple is jumping on the hype train to juice new iPhone sales: an iPhone 15 Pro or newer is required to use the features.
