
Latest news with #speechtranslation

Google Meet is breaking down language barriers with real-time translation

Android Authority

20-05-2025

  • Android Authority

Google Meet is breaking down language barriers with real-time translation

Edgar Cervantes / Android Authority

TL;DR

  • Google Meet is rolling out a speech translation feature.
  • Speech translation provides near real-time translation that preserves the speaker's voice, tone, and expression.
  • It will be available in beta to Google AI Pro and Ultra subscribers first.

The day is here: Google is kicking off its I/O event today. While the event is aimed at developers, the company tends to make a fair number of announcements that also matter to the general public. For instance, Google is now rolling out a new Meet feature that will come to consumers first.

If you use Google Meet for video calls with friends or loved ones, you can take advantage of a new feature starting today: speech translation, which provides low-latency, near real-time translation of spoken words. So if you speak English and want to talk with someone who speaks Spanish, speech translation will convert their words into your preferred language.

Near real-time translation is impressive on its own, but the feature also retains the speaker's voice, tone, and expression, so the conversation still comes across as natural. Initially, the feature will support only English and Spanish, but Google plans to add more languages in the coming weeks.

Unfortunately, speech translation won't be available to everyone right away. The tech giant is rolling it out in beta to Google AI Pro and Ultra subscribers first. As for Workspace customers, Google says early testing will begin sometime this year. The feature is also coming to Google Beam, previously known as Project Starline.

Google Meet is getting real-time speech translation

TechCrunch

20-05-2025

  • Business
  • TechCrunch

Google Meet is getting real-time speech translation

Google announced at Google I/O 2025 that it's bringing real-time speech translation to Google Meet. The feature leverages a large language audio model from Google DeepMind to allow for natural, free-flowing conversation with someone who speaks a different language, Google says.

Speech translation in Meet translates spoken words into the listener's preferred language in real time, and voice, tone, and expression are all preserved in the translation.

The tech giant says the new feature has a variety of use cases. For instance, it can be used by English-speaking grandchildren talking to their Spanish-speaking grandparents, or by companies that operate across different regions to let global colleagues connect and chat in real time. The latency for speech translation is very low, according to Google, allowing multiple people to chat together, which the company says hasn't been possible until now.

Image credits: Google

When the person on the other side speaks, you will still faintly hear their original voice, with the translated speech overlaid on top.

Speech translation in Google Meet will begin rolling out to consumer AI subscribers in beta starting Tuesday. The feature will first be available in English and Spanish, with more languages, including Italian, German, and Portuguese, coming in the next few weeks. Google says it's also building out speech translation in Meet for businesses, with early testing coming to Workspace customers this year.

Google Meet can translate what you say into other languages

The Verge

20-05-2025

  • Business
  • The Verge

Google Meet can translate what you say into other languages

Google is bringing speech translation to Meet. During I/O on Tuesday, Google revealed a new Gemini-powered feature that can translate what you say into your conversation partner's preferred language. Google says the AI-generated translation will preserve the sound of your voice, tone, and expression.

In a demo shown by Google, an English speaker joins a call with a colleague who speaks Spanish. Once the colleague turns on Gemini's speech translation, Meet begins dubbing over what they're saying with an AI-generated English translation that includes all their vocal inflections, and vice versa. Microsoft Teams similarly launched an AI translation feature in preview earlier this year.

For now, Meet can only translate between English and Spanish, but Google plans to add support for Italian, German, and Portuguese in the 'coming weeks.' The feature is rolling out now to subscribers, and Google will also bring it to enterprises later this year.

AI translation startup led by UW computer science whiz enables ‘superhuman hearing capabilities'

Geek Wire

13-05-2025

  • Business
  • Geek Wire

AI translation startup led by UW computer science whiz enables ‘superhuman hearing capabilities'

A new Seattle startup is taking speech translation to a whole new dimension. Hearvana, founded this year by University of Washington computer science researchers, translates speech from multiple speakers in real time while preserving their spatial positions and vocal identities. The idea is that you could be in a noisy setting surrounded by people speaking different languages and still understand what's being said.

'Our system transforms your entire auditory space into your own language, preserving the unique voices of speakers, their 3D spatial positions, and doing it all in real time, even as people move,' Shyam Gollakota, co-founder of Hearvana, wrote on LinkedIn.

Gollakota, a leading inventor who heads the UW's Mobile Intelligence Lab, told GeekWire that the company is 'creating AI breakthroughs that are shaping the future of sound.'

'Our AI algorithms enable on-device superhuman hearing capabilities and will be part of billions of earbuds, hearing aids and smartphones,' he said. 'It is an exciting time.'

Hearvana says its translation features go beyond what's available on existing wearables such as Meta's Ray-Ban glasses. The startup's underlying tech, dubbed 'spatial speech translation,' was detailed in a research paper published last month. The study tested translation from French, German, and Spanish into English. The system outperformed baselines in user studies, and participants praised its translation accuracy, speaker fidelity, and spatial realism.

Hearvana is being incubated at the AI2 Incubator in Seattle. 'Hearvana is my favorite kind of startup as it addresses a familiar pain point — we all struggle to hear in noisy settings like a restaurant or a party — with deep AI technology,' said Oren Etzioni, technical director and partner at AI2 Incubator. Etzioni, the former CEO of the Allen Institute for AI, called Gollakota a 'world-class computer scientist.'

Gollakota has a track record of turning research into startups. He previously co-founded Sound Life Sciences, a UW spinout that developed an app to monitor breathing and was acquired by Google in 2022. He's also a co-founder of Wavely Diagnostics, which uses a smartphone app to detect ear infections. Gollakota last year won a $100,000 award as one of six researchers honored with the Infosys Prize. His research focuses on wireless tech, battery-free devices, WiFi sensing and imaging, medical diagnostics via smartphones, and more.

Malek Itani, a research assistant and PhD student at the UW's computer science school, is a co-founder of Hearvana. Itani was an intern at Meta, where he worked on smart glasses. Gollakota and Itani published research last year on a headphone prototype that uses AI to create a 'sound bubble' and can learn the distance to each sound source in a room.
