Latest news with #VoiceAI

Yahoo
an hour ago
- Business
- Yahoo
Telnyx expands conversational AI stack with new audio, TTS, and integration capabilities
These latest updates make it easier for teams to deliver high-quality, production-ready voice experiences at scale.

AUSTIN, TX, July 30, 2025 (GLOBE NEWSWIRE) -- Telnyx, a global leader in communications infrastructure, today announced a wave of platform updates that enhance the core capabilities of its conversational AI stack. The release includes Azure Neural HD text-to-speech, built-in noise suppression, MCP server integration, embeddable AI Agent widgets, and robust tools for versioning and testing. These features give developers more power and flexibility to build high-quality Voice AI Agents at scale while simplifying deployment and improving audio quality across every interaction.

One of the most notable updates is the addition of Microsoft Azure Neural HD voices to Telnyx's text-to-speech (TTS) lineup. These ultra-realistic voices offer expressive, human-like delivery and are trained on millions of multilingual utterances. Developers can now toggle between Telnyx-native and Azure Neural HD voices with a single parameter. With transparent, pay-as-you-go pricing and full support for bring-your-own-carrier (BYOC) routing, this update provides premium voice quality and total flexibility across voice experiences.

Additionally, Telnyx has refreshed its own text-to-speech portfolio with crisper NaturalHD voices that add richer emotion, handle disfluencies such as 'um' and 'uh,' and even deliver light laughter. Developers can toggle among voice options via the AI Assistant Builder or with a single parameter in the Voice API or TeXML, keeping existing carrier routes and pay-as-you-go pricing so they can align audio quality with call intent and budget without changing their infrastructure.

In parallel, Telnyx has enhanced the audio experience of its Voice AI Agents by introducing built-in noise suppression. This feature is designed to make conversations feel smoother and more lifelike, especially in real-world environments like mobile networks or shared spaces.
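As an illustration of what that single-parameter voice switch might look like, here is a minimal sketch that assembles a TeXML-style instruction document with Python's standard library. TeXML is Telnyx's TwiML-like XML dialect; the element names follow that convention, but the specific voice identifier strings below are hypothetical placeholders, not documented Telnyx values.

```python
# Sketch: building a TeXML-style document where the TTS voice is one parameter.
# Element names follow TwiML-like conventions; the voice identifier strings
# used below are hypothetical, not documented Telnyx values.
import xml.etree.ElementTree as ET

def build_say_document(text: str, voice: str) -> str:
    """Return a minimal TeXML-style XML string speaking `text` with `voice`."""
    response = ET.Element("Response")
    say = ET.SubElement(response, "Say", voice=voice)
    say.text = text
    return ET.tostring(response, encoding="unicode")

# Switching providers is a one-parameter change in this sketch:
native_doc = build_say_document("Hello!", "Telnyx.NaturalHD.example")  # hypothetical id
azure_doc = build_say_document("Hello!", "Azure.NeuralHD.example")     # hypothetical id
```

The point of the sketch is that routing, carrier configuration, and call flow stay identical; only the `voice` attribute changes between providers.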
Noise suppression filters out background sounds to ensure clarity, delivering a more engaging and professional voice experience right out of the box.

Telnyx has also expanded its transcription capabilities with support for Deepgram's Nova-2 and Nova-3 speech-to-text models, bringing low-latency, production-grade transcription to Voice AI Agents. With advanced accuracy in noisy environments and built-in support for over 30 languages and dialects, Deepgram enables teams to deliver faster, more natural conversations across global use cases.

Voice AI Agents now support direct integration with official Model Context Protocol (MCP) servers. This significantly simplifies the process of connecting to public APIs that support the MCP standard. By removing the need for middleware or manual tooling, developers can set up integrations faster, reduce complexity, and unlock a broader range of use cases powered by third-party data and services.

On the front end, businesses can now deploy Voice AI Agents as a widget directly on their websites with a single snippet of code. The new widget functionality enables fully interactive voice agents to go live in minutes without additional development lift. This makes it easier than ever to add AI-powered voice support, lead capture, and automation to customer-facing experiences.

Finally, Telnyx has rolled out versioning and testing tools for Voice AI Agents to help teams iterate with greater control. Developers can now create and manage multiple versions of an agent, test updates without impacting production, and safely deploy changes using A/B testing or canary releases. This update simplifies prompt engineering and provides a reliable workflow for improving agent behavior while minimizing risk, especially for high-volume or regulated deployments.

With these updates, Telnyx continues to invest in a full-stack platform purpose-built for real-time conversational AI.
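The canary-release workflow described above can be sketched generically: route a fixed percentage of calls to a candidate agent version using a stable hash of the call ID, so the same caller consistently lands on the same version during a gradual rollout. This is a generic rollout pattern, not Telnyx's API; the function and version names are invented for illustration.

```python
# Generic canary-routing sketch: deterministically send a fixed share of
# traffic to a candidate agent version, keyed on a stable hash of the
# call ID. Version names ("stable"/"canary") are illustrative only.
import hashlib

def pick_agent_version(call_id: str, canary_percent: int) -> str:
    """Return 'canary' for roughly canary_percent% of call IDs, else 'stable'.

    Hashing the call ID (rather than random sampling) means a given caller
    always maps to the same version, keeping their experience consistent
    while the rollout percentage is gradually increased.
    """
    digest = hashlib.sha256(call_id.encode("utf-8")).digest()
    bucket = digest[0] * 256 + digest[1]  # uniform value in 0..65535
    return "canary" if bucket % 100 < canary_percent else "stable"
```

An A/B test follows the same shape with two candidate versions instead of one; promoting the canary to 100 percent completes the rollout.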
Whether improving audio quality, simplifying integrations, enabling rapid testing, or accelerating deployment, every feature is designed to help teams launch faster and scale with confidence. These releases mark another step towards a more flexible, production-ready infrastructure for building intelligent voice experiences at scale. Experience the benefit of these features in your Voice AI Agents today.

About Telnyx: Telnyx delivers global, carrier-grade communications infrastructure combined with advanced conversational AI, providing businesses with reliable, scalable, and intelligent customer interaction solutions. Organizations worldwide choose Telnyx for its robust infrastructure, intuitive tools, and unmatched support.

CONTACT: Maeve Sekulovski maeve@


Entrepreneur
15 hours ago
- Business
- Entrepreneur
ElevenLabs Rolls out India Data Residency for Voice AI Compliance and Speed
ElevenLabs is already working with several Indian platforms, including Meesho, Apna, and 99acres, mainly in conversational AI and customer engagement applications

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.

ElevenLabs, an AI audio company, has launched a dedicated India Data Residency solution, enabling enterprises operating in or serving the Indian market to deploy its Voice AI models in full compliance with local data protection regulations. With growing scrutiny over data localisation, the new offering ensures that customer voice data is hosted entirely within India-based infrastructure, meeting the country's data residency mandates. The service is designed to support enterprise-grade use cases like multilingual customer support and real-time voice interfaces, particularly in regulated sectors such as financial services, telecom, insurance, e-commerce, and education.

Key features of the India Data Residency initiative include local hosting, an isolated computing environment separated from global systems, and integration with ElevenLabs' existing compliance stack, including GDPR-readiness, SOC 2 certification, HIPAA-ready frameworks, and Zero Retention controls. The company also claims improved latency for Indian users and access to an expanded multilingual voice library inclusive of Indian voices.

"India has rapidly become one of our largest and fastest-growing markets outside the U.S., driven by enterprises, creators, and developers embracing voice AI at scale," said Siddharth Srinivasan, GTM at ElevenLabs. "With India Data Residency, we're deepening our commitment to the market by offering world-class voice quality tailored for India's languages and accents, while ensuring data stays secure, compliant, and locally hosted."

ElevenLabs is already working with several Indian platforms, including Meesho, Apna, and 99acres, mainly in conversational AI and customer engagement applications.
The rollout aims to make it easier for companies to adopt Voice AI without facing cross-border data transfer issues or latency challenges.


Forbes
2 days ago
- Business
- Forbes
What Makes An AI Voice Persuasive? Science Has Answers
Voice AI agents are growing in popularity. As businesses transform their contact centers to include AI agents, many are struggling with a complex question: What should those agents sound like?

Businesses have more options than they could ever consider. There are voices based on real-life voice actors, whose recordings can be used to have an AI tool say just about anything. And there are synthetic, entirely AI voices. In the past, those may have been easier to distinguish. But these days, some AI voices sound just as natural as real ones. Digging into this question can seem daunting -- not only because of the array of possibilities, but also because it involves lingo most people have never had to think about. Here's a guide to help you make sense of this, and simple ways to help guarantee success.

From 'sociophonetics' to 'illocutions'

Just weeks ago, a group of researchers in the United States and China published a study titled AI Voice Matters: AI's Pragmatics and Sociophonetics in Persuasion. They experimented to determine how various AI voices affected potential customers in phone calls. Their conclusions show the many moving pieces to consider.

For example, they looked at 'coquetry,' a term for a playful, often flirtatious style in how an AI voice can be designed. They found that it didn't help. In fact, the opposite occurred. 'We observed a direct negative effect of a coquetry voice on persuasion, particularly for male chatbots,' the team from the Beijing University of Technology, the University of Texas at Austin, and the University of Memphis reported. Meanwhile, 'The stern voice was found to be similarly effective as the gentle voice. Moreover, a male stern voice prompted customers to ask more questions, thereby increasing its persuasive power.'

How about age? 'There was no significant difference between the persuasive effects of young and old voices,' the group (Lin Jia, Xiayu Hu, Yifan Yu, and Huigang Liang) wrote.
'Additionally, an old voice's persuasion effect is partly through extended conversation duration' -- yet another element to consider. They also examined the type of language structure these voices used, and whether customers responded differently when various structures were used by different voices. 'An affirmative sentence, particularly in female voices, led to an increase in customer inquiries and in turn enhanced the overall persuasive effect.' Rhetorical sentences and questions were less effective, the study added, referring to all of these as 'illocutionary acts.' Ultimately, the right voices for your organization will also depend on factors specific to your brand.

Why a unified model is essential

Because my work at Nextiva involves helping organizations transform their CX (customer experience), I go through this process with clients. I explain to them that, in assessing which voices work best, they need to keep in mind the multiple end goals. Sure, ideally each interaction will lead to a purchase or a resolved issue. But you also want each interaction to leave the customer with a lasting, positive impression of the brand, even if those other goals were not achieved.

Testing out voices requires a comprehensive system. You need to get real-time information on how customers respond in multiple scenarios. Ultimately, you may arrive at a point of extreme personalization, in which specific voices automatically come up for each customer based on their history. For all this to happen, your company needs to gather all of the information it can about customers into a single record. That's why unified customer experience management is vital. An AI-powered UCXM platform consolidates customer interactions, gets rid of silos, and provides a holistic view of the customer. Your entire company operates within the same platform, so any new information relevant to each customer pops up immediately (with privacy and compliance covered, of course).
The tools highlight key insights at a glance. In this era, that kind of omnichannel communication system is a necessity. A UCXM quickly helps brands develop their 'voices.' That used to be a figurative term referring to the kinds of language organizations use; now, it's become literal as well. As you test out voices to build into your UCXM, you'll most likely experience some stumbles. That's OK. With technology advancing so quickly, there are always new hurdles to overcome. But you'll learn from them, and keep moving forward. After all, every fall is a chance to get back up and grow stronger.


Entrepreneur
2 days ago
- Business
- Entrepreneur
Is Voice AI Becoming India's Next Digital Backbone?
According to NASSCOM, the Indian voice AI market is projected to reach USD 1.82 billion by 2030

Opinions expressed by Entrepreneur contributors are their own.

Voice AI is quickly becoming the new battleground in shaping the future of human-machine interactions. The recent USD 45 million acquisition of voice AI startup Play AI by Meta brought renewed attention to the space.

But why the sudden rush in India?

"There's a rush towards voice tech startups because the country's vast linguistic diversity and rising demand for high-quality, real-time voice translation have made voice AI a natural solution," explains Ganesh Gopalan, Co-founder & CEO of Gnani AI. "With the rapid adoption of smartphones and consumers increasingly expecting seamless, human-like interactions, voice is emerging as the preferred interface for digital engagement."

According to NASSCOM, the Indian voice AI market is projected to reach USD 1.82 billion by 2030. While India has 22 official languages, it is home to over 400 living languages. English, often assumed to be the digital default, is neither the first spoken nor written language for the majority of Indians. Until now, much of emerging tech has catered only to metro markets and English-speaking audiences. Voice-led AI startups, however, are disrupting that trend. Indian entrepreneurs are now tapping deeper into Tier 2/3 markets, targeting vernacular language speakers and building inclusive solutions for non-English and non-Hindi audiences.

Where the action is

Gopalan notes that sectors like banking, finance, and insurance (BFSI) have seen the most traction. "Voice AI is being used for customer support, lead qualification, EMI collections, policy renewals, and reminders. This growth ties closely to India's digital inclusion push, enabling businesses to engage a much wider audience in their native languages."
India is also becoming a strategic growth market for global Voice AI firms. ElevenLabs, for instance, recorded a 50 per cent growth in usage in India between November and January, making the country its fastest-growing market globally. Siddharth Srinivasan, GTM–India at ElevenLabs, observes, "India was always a market waiting for a solution in this space. We're inherently multilingual, most of us are bilingual or trilingual. The need for high-quality, real-time voice solutions has always existed."

Still early days?

But is this rush solving meaningful, scalable problems, or are we still in an experimentation phase? Arjun Malhotra, General Partner at Good Capital, believes the sector is at "an interesting middle ground." "In BFSI, voice AI is solving real operational challenges around lending and collections at scale. Companies are successfully reaching lakhs of customers simultaneously. However, the broader ecosystem is still evolving. While enterprise applications have found clear product-market fit in certain use cases, consumer applications remain largely in the discovery phase."

From an investor's perspective, technical differentiation is key. "Given the competitive landscape, we evaluate whether startups are building foundational technology or merely implementing existing solutions," Malhotra explains. "Companies that differentiate on the core mechanics of voice AI rather than just the application layer have stronger moats." He also emphasises the importance of domain expertise. "Voice AI requires deep technical expertise combined with domain knowledge. We look for teams that understand both the technology's limitations and the specific market needs they're addressing."

The bigger question still remains: can voice AI become foundational digital infrastructure? Malhotra thinks the answer depends on the use case.
"In enterprise contexts, we're seeing voice AI evolve from a feature (like automated calling) to a platform that can handle complex workflows and multiple touchpoints." The opportunity, he adds, lies in companies that can expand beyond single-use cases and integrate deeply into business workflows.

What's next for voice AI in India?

Looking ahead, Malhotra sees the next 24 months as pivotal. "Voice AI will likely become deeply embedded in workflows rather than remain a standalone tool. Companies that can demonstrate this workflow integration will command premium valuations." He also foresees the emergence of breakthrough consumer applications such as voice companions, therapy, and coaching tools, where Indian startups could potentially create globally competitive products, especially given the market's natural comfort with voice-based interactions. Finally, Malhotra believes we'll see the rise of foundational voice AI infrastructure startups that provide the "picks and shovels" enabling the entire ecosystem.