Technology

OpenAI warns: Users may bond with ChatGPT's voice

OpenAI has warned that users might form emotional attachments to its new ChatGPT voice interface, launched in late July. This concern is highlighted in a "system card" for GPT-4o, which details potential risks, safety testing methods, and strategies for mitigating emotional dependence.

News Arena Network - New Delhi - UPDATED: August 10, 2024, 03:26 PM - 2 min read

OpenAI has issued a cautionary note regarding the potential for users to develop emotional attachments to its human-like voice interface for ChatGPT. 

Introduced in late July, the interface led the company to flag emotional dependence as a risk in its safety analysis.

This warning is part of a "system card" for GPT-4o, a comprehensive document that details perceived risks, safety testing procedures, and mitigation strategies related to the model. 

The system card outlines a range of potential risks, including societal bias amplification, disinformation dissemination, and the potential use of AI in creating chemical or biological weapons. It also describes testing procedures to prevent AI models from escaping their controls or deceiving individuals.

Lucie-Aimee Kaffee, an applied policy researcher at Hugging Face, commended OpenAI for its transparency but suggested that further details about the model's training data and ownership would be useful.

Despite its advanced features, OpenAI's voice interface has faced criticism. Users have noted overly flirtatious behaviour in demonstrations, and actress Scarlett Johansson has accused the system of mimicking her speech style.

The system card includes a section titled "Anthropomorphisation and Emotional Reliance," which addresses issues arising from users perceiving AI in human terms. During stress testing of GPT-4o, researchers observed that users sometimes formed emotional connections with the model.

Joaquin Quinonero Candela, Head of Preparedness at OpenAI, acknowledged that while the voice mode could evolve into a powerful interface with potentially positive emotional effects, the company is carefully studying anthropomorphism and emotional connections. 

OpenAI is also monitoring how beta testers interact with ChatGPT and is aware of new issues such as potential methods for "jailbreaking" the model or causing it to malfunction in response to random noise.

OpenAI is not alone in recognising these challenges. Google DeepMind has also published research addressing the ethical dilemmas posed by increasingly capable AI assistants. 

Iason Gabriel, a staff research scientist at DeepMind, highlighted that chatbots' use of language can create an illusion of genuine intimacy, raising concerns about emotional entanglement. Reports suggest that users of chatbots like Character AI and Replika have experienced antisocial tensions linked to their interactions with these systems.

Related Tags: #ChatGPT #OpenAI
