OpenAI has launched the eagerly awaited "Advanced Voice Mode" (AVM) for ChatGPT, now available in an alpha release to a select group of users.
First announced and demonstrated in May, AVM enables real-time spoken conversations with ChatGPT, powered by GPT-4o's voice capabilities.
This feature replicates the dynamics of human conversation, with the artificial intelligence (AI) model responding in natural human speech patterns. Users can even interrupt it mid-sentence, with the system maintaining context throughout the interaction.
OpenAI emphasized its commitment to user privacy and security in an announcement on X, explaining:
We tested GPT-4o's voice capabilities with 100+ external red teamers across 45 languages. To protect people's privacy, we've trained the model to only speak in the four preset voices, and we built systems to block outputs that differ from those voices. We've also implemented guardrails to block requests for violent or copyrighted content.
The company has opted for a limited alpha release to further assess AVM's performance and address any safety issues, with plans to gradually increase user access and make the feature available to all Plus subscribers by the fall.
This careful approach to releasing AVM highlights OpenAI's dedication to safety and innovation in AI technology.
In other news, OpenAI's GPT models have recently been incorporated into neurotech firm Synchron's brain-computer interface (BCI), revolutionizing communication for patients with severe neurological disorders.