OpenAI warns ChatGPT users against getting emotionally attached to the chatbot

OpenAI recently conducted a safety review of GPT-4o, which found that ChatGPT Voice Mode users might “form social relationships with the AI” and seek companionship from it. The findings were published in a safety review report titled “GPT-4o System Card”, which outlines the safety work carried out before OpenAI made GPT-4o available to the general public.

The safety challenges identified by OpenAI include risks such as the model giving erotic or violent responses, generating disallowed content, and producing biased output. One of the risks, however, is that users might “form social relationships with the AI” and thereby reduce their need for human interaction.

The company also noted that “extended interaction with the model might influence social norms.” This risk, however, applies only to the new advanced Voice Mode, which is capable of mimicking human speech and even conveying emotion. OpenAI also revealed that the team responsible for red-teaming GPT-4o found instances of testers becoming attached to and forming emotional bonds with the chatbot during internal trials.

The report also addressed copyright issues that have affected the company and the development of large language models more broadly, saying that GPT-4o is capable of refusing requests for copyrighted content, including requests to generate output containing music.

While there is currently no solution to the problem other than limiting the time spent using the chatbot’s Voice Mode, OpenAI said that it “intends to further study the potential for emotional reliance” and how the “audio modality may drive behaviour”.
