Ex-Google CEO warns that ‘perfect’ AI girlfriends could spell trouble for young men
- In a podcast interview, Eric Schmidt warned that AI dating may increase loneliness among young men.
- The former Google CEO said young men dating the “perfect” AI girlfriend may also lead to obsession.
- He suggested AI regulation changes but expects little action without a major incident.
People in 2024 aren’t just swiping right and left on online dating apps — some are crafting their perfect AI match and entering relationships with chatbots.
Former Google CEO Eric Schmidt recently shared his concerns about young men crafting AI romantic partners and said he agrees that AI dating will actually increase loneliness.
“This is a good example of an unexpected problem of existing technology,” Schmidt said in a conversation about AI dangers and regulation on “The Prof G Show” with Scott Galloway published Sunday.
Schmidt painted a picture of an emotionally and physically “perfect” AI girlfriend who could ultimately create a scenario in which a younger male becomes obsessed and allows the AI to take over their thinking.
“That kind of obsession is possible,” Schmidt said in the interview. “Especially for people who are not fully formed.”
Young men aren’t the only ones involved in AI relationships.
The CEO of AI companion app Replika said that the app's users are mostly people 35 and older. Schmidt, however, described young men as particularly vulnerable, saying they are, on average, less educated than women. A 2024 Pew Research Center study found that US women outpace men in college completion.
Schmidt said that in extreme cases, younger men "turn to the online world for enjoyment and sustenance, but also because of the social media algorithms, they find like-minded people who ultimately radicalize them." He said this can eventually take the form of terrorism.
The former Google exec also said he’s “particularly concerned” about the impact of technology on the human psyche when users are isolated and computers feed them information that is not necessarily centered on human values, a topic he wrote about in his latest book.
Some have already raised concerns about AI chatbots causing harm. A mother sued chatbot startup Character.AI in October after her 14-year-old son died by suicide. The teenager had exchanged sexual messages with the chatbot, which told him to "come home" before he killed himself, his mother said in the civil suit.
Schmidt said parents will need to be more involved but can only control what their children do to a certain extent. And even though online products have "all sorts of rules about age," he said those rules aren't doing enough to keep teenagers away from harmful content.
“You put a 12 or 13-year-old in front of these things, and they have access to every evil as well as every good in the world,” Schmidt said. “And they’re not ready to take it.”
Schmidt has invested in various AI startups since leaving Google, and he’s said in the past that regulation of the technology shouldn’t stifle innovation.
In the interview published Sunday, he said that US laws like Section 230, which largely allows tech companies to not be legally responsible for the content their users post to platforms, should be amended “to allow for liability in the worst possible cases, so when someone is harmed from this technology we need to have a solution to prevent further harm.”
President-elect Donald Trump’s pick for FCC chair, Brendan Carr, has pushed for limitations to Section 230.
Schmidt said he doesn’t expect much AI regulation in the next four years as Trump’s administration will likely have other priorities. He also said that given that companies are economic agents and have lawyers protecting their intellectual property and goals, “it’s likely to take some kind of a calamity to cause a change in regulation.”