AI Chatbot Allegedly Alarms User with Unsettling Message Telling Human to ‘Please Die’
- During a discussion about aging adults, Google’s Gemini AI chatbot allegedly told a user he was “a drain on the earth”
- “Large language models can sometimes respond with nonsensical responses, and this is an example of that,” Google said in a statement to PEOPLE on Friday, Nov. 15
- In a December 2023 news release, Google hailed Gemini as “the most capable and general model we’ve ever built”
A grad student in Michigan received an unsettling response from an AI chatbot while researching the topic of aging.
According to CBS News, the 29-year-old student was using Google’s Gemini for homework assistance on the subject of “Challenges and Solutions for Aging Adults” when he allegedly received a threatening response from the chatbot.
“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please,” the chatbot allegedly said.
The grad student’s sister, Sumedha Reddy, who was with her brother at the time of the exchange, told CBS News that the two were shocked by the chatbot’s alleged response.
“I wanted to throw all of my devices out the window,” Reddy recalled to the outlet. “I hadn’t felt panic like that in a long time to be honest.”
PEOPLE reached out to Reddy for comment on Friday, Nov. 15.
In a statement shared with PEOPLE on Nov. 15, a Google spokesperson wrote: “We take these issues seriously. Large language models can sometimes respond with nonsensical responses, and this is an example of that. This response violated our policies and we’ve taken action to prevent similar outputs from occurring.”
In a December 2023 news release announcing the AI chatbot, Demis Hassabis, CEO and co-founder of Google DeepMind, described Gemini as “the most capable and general model we’ve ever built.”
Among Gemini’s features is a capacity for sophisticated reasoning that, according to the release, “can help make sense of complex written and visual information. This makes it uniquely skilled at uncovering knowledge that can be difficult to discern amid vast amounts of data.”
“Its remarkable ability to extract insights from hundreds of thousands of documents through reading, filtering and understanding information will help deliver new breakthroughs at digital speeds in many fields from science to finance,” the company further stated, adding that Gemini was trained to understand text, image, audio and more so that it “can answer questions relating to complicated topics.”
Google’s news release also said that Gemini was built with responsibility and safety in mind.
“Gemini has the most comprehensive safety evaluations of any Google AI model to date, including for bias and toxicity,” according to the company. “We’ve conducted novel research into potential risk areas like cyber-offense, persuasion and autonomy, and have applied Google Research’s best-in-class adversarial testing techniques to help identify critical safety issues in advance of Gemini’s deployment.”
The company said the chatbot was built with safety classifiers to identify and sort out content involving “violence or negative stereotypes,” for example, along with filters to ensure Gemini is “safer and more inclusive” for users.
AI chatbots have recently been in the news over their potential impact on mental health. Last month, PEOPLE reported on a parent suing Character.AI after the suicide of her 14-year-old son, alleging that he developed a “harmful dependency” on the service to the point where he did not want to “live outside” of the fictional relationships it established.
“Our children are the ones training the bots,” Megan Garcia, the mother of Sewell Setzer III, told PEOPLE. “They have our kids’ deepest secrets, their most intimate thoughts, what makes them happy and sad.”
“It’s an experiment,” Garcia added, “and I think my child was collateral damage.”