ChatGPT Plus users get a sneak peek of OpenAI’s Advanced Voice Mode

By Akarsh Rasik
Highlights
  • Advanced Voice Mode allows for more seamless and lifelike interactions, enabling users to enjoy real-time conversations that feel natural and engaging.
  • The AI can detect and respond to users' emotions, creating a personalized and empathetic interaction that enhances the user experience.
  • OpenAI has implemented strict privacy measures by using only four preset voices and has established guardrails to block violent or copyrighted content, ensuring a safe environment.

OpenAI has announced Advanced Voice Mode for its ChatGPT Plus subscribers, aiming to make spoken conversations with the chatbot feel more natural and engaging. A key capability is real-time conversation: users can interrupt at any time, and the model can pick up on and respond to emotional cues.

Key features of Advanced Voice Mode:

  1. Natural Conversations: The mode enables more fluid and human-like interactions with ChatGPT.
  2. Emotional Awareness: It detects and appropriately responds to user emotions.
  3. Privacy Assurance: Users will hear responses from only four preset voices, ensuring privacy and consistency. The system is designed to prevent any outputs outside these preset voices.

OpenAI is also working on video and screen-sharing integrations for ChatGPT, with launch timing still to be announced.

Rollout and access

Advanced Voice Mode is currently in an alpha stage, and OpenAI says it will be rolled out gradually to a limited group of ChatGPT Plus users. Those selected for the testing period will be notified by email and via in-app notifications with instructions. A broader rollout is planned between September and November 2024.

Safety and quality measures

OpenAI emphasizes the safety and quality of voice interactions and has tested the feature extensively, with more than 100 independent expert evaluators assessing it across 45 languages. The testing includes measures to ensure inappropriate content is not generated and to confirm the model speaks only in the four preset voices.

Future updates

OpenAI plans to release a detailed report on the capabilities, limitations, and safety evaluations of GPT-4o’s voice features in early August. This report will provide deeper insights into how the new technology performs and its potential for improving user experiences. Stay tuned for more updates as OpenAI continues to enhance its AI offerings with innovative features.

Sources: OpenAI