OpenAI Asks ChatGPT-4o Users Not To Develop Feelings For It Due To Its Human-Like Interactions

ChatGPT-4o is the latest iteration of OpenAI’s chatbot lineup, and as you would expect, the firm has some concerns about how users engage with it. Because the latest version can exhibit human-like behavior and responses, the artificial intelligence company is worried that users will develop feelings for the chatbot.

ChatGPT-4o offers quicker replies and a new voice feature that emulates human speech, which is precisely what concerns OpenAI

While the billion-dollar startup continues to refine its product, it cannot help but notice certain patterns among ChatGPT-4o users. The new chatbot was designed to deliver an experience similar to talking to a human, but it appears that OpenAI underestimated the emotional connection that can form between users and the program. The company highlighted its findings as follows.

“During early testing, including red teaming and internal user testing, we observed users using language that might indicate forming connections with the model. For example, this includes language expressing shared bonds, such as “This is our last day together.” While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time. More diverse user populations, with more varied needs and desires from the model, in addition to independent academic and internal studies will help us more concretely define this risk area.

“Human-like socialization with an AI model may produce externalities impacting human-to-human interactions. For instance, users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships. Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions.”

Developing feelings for ChatGPT-4o is detrimental for several reasons. The primary one concerns hallucinations: previously, users would disregard the chatbot’s mistakes because earlier iterations clearly felt like an AI program rather than a person. Now that the program delivers a near-human experience, anything it says could be accepted without further questioning.

Having noticed these patterns, OpenAI will now monitor how people develop emotional bonds with ChatGPT-4o and tweak its systems accordingly. The company should also consider adding a disclaimer at the start of conversations so that users do not fall in love with it, because, in the end, it is an artificial intelligence program.

News Source: OpenAI
