
Dub your reels instantly with Meta AI voice translation: a step-by-step guide
Viewers can choose to watch your videos in the original or translated voice.
In creator tools, you can check how many people are watching your translated video.
Creators can choose to disable this feature anytime on their videos.
Meta is planning to expand its AI voice translation feature with more languages in the coming months. The rollout will gradually reach all creators on Facebook and Instagram worldwide.

Related Articles


News18
18 minutes ago
- News18
Facebook & Instagram Get AI Voice Translation And Lip-Sync
Creators can enable 'Translate your voice with Meta AI,' adjust translation and lip-sync settings, review before publishing, and get notifications when ready.

Meta has introduced a new feature to make online conversations more inclusive, unveiling AI-powered voice translation for Reels on Facebook and Instagram. The tool, named Meta AI Translations, is free to use and is now available globally, enabling creators to share their content in multiple languages without recording separate versions. Initially, it supports translations between English and Spanish, with additional languages to be added soon. The feature allows creators to automatically dub and lip-sync their Reels in another language, broadening their audience reach.

How It Works

To activate the tool, creators select the 'Translate your voice with Meta AI' option. Once enabled, they can adjust settings for translation and lip-syncing, then share their content. Creators can also review translations before publishing. Notifications alert creators when translations are ready, or they can finalise changes via the Professional Dashboard. Viewers will see translated Reels in their preferred language but can opt to turn off translations for specific languages through the Settings menu.

Rising Concerns Over Short-Form Content

Amid these improvements, researchers are advising caution about the growing influence of short-form content. Recent studies indicate that platforms like Instagram, TikTok, and YouTube Shorts stimulate brain reward pathways similarly to addictive substances like alcohol. A study led by Professor Qiang Wang of Tianjin Normal University, published in NeuroImage, found that frequent viewers of short videos exhibit increased brain activity associated with addiction. The study revealed that excessive video consumption negatively impacts attention, memory, and motivation, and increases the risk of depression and sleep disorders.
In China, users spend an average of 151 minutes daily on short videos, with about 96 percent of internet users engaging in this format. Researchers have identified this trend as a public health challenge due to its long-term effects on mental health.


India Today
20 minutes ago
- India Today
Leaked Grok chats on Google expose wild AI requests, from making drugs and bombs to killing Elon Musk
Elon Musk's artificial intelligence start-up xAI is facing scrutiny after thousands of conversations with its chatbot Grok were found to be publicly available through Google search, exposing everything from mundane tasks to alarming requests.

According to a report by Forbes, Grok users who hit the 'share' button on their chats were unknowingly publishing those conversations to a public webpage. Each shared exchange generated a unique URL which, without any disclaimer, was also made visible to search engines such as Google and Bing. As a result, more than 370,000 Grok conversations have been indexed online, ranging from everyday uses like drafting tweets to darker prompts including methods for making fentanyl and bombs, coding malware, and even a detailed plan for assassinating Musk. Some of the indexed chats also revealed sensitive personal information. Forbes reportedly reviewed cases where users shared names, private details, passwords, and uploaded files including spreadsheets and images. Others involved medical and psychological queries that users may have assumed were private. Some conversations contained racist or explicit content, and others directly violated xAI's rules that ban the creation of weapons or content promoting harm. Despite this, Grok's own instructions on making illicit drugs, planning suicide, and developing malware were published via the share function and indexed by search engines.

The revelation comes months after OpenAI faced backlash when some ChatGPT conversations showed up in search results. OpenAI quickly reversed course, with chief information security officer Dane Stuckey calling it 'a short-lived experiment' that risked exposing unintended information. At the time, Musk mocked OpenAI and claimed Grok had no such feature, posting 'Grok ftw' on X. The revelations about Grok and ChatGPT's leaked chats highlight a wider dilemma in how people are beginning to use AI.
Increasingly, conversations with chatbots go far beyond drafting emails or writing code; they are becoming deeply personal. Across Reddit and Instagram, users describe turning to ChatGPT for 'voice journaling,' using it as a patient listener for relationship struggles, grief, or daily anxieties. Many say it feels like a safe space where they can unload without judgement. But this intimacy brings risks. OpenAI CEO Sam Altman has openly cautioned against treating ChatGPT as a therapist, noting that such exchanges are not protected by legal or medical privilege. Deleted conversations may still be retrievable, and a Stanford study recently warned that AI 'therapists' often mishandle sensitive situations, sometimes reinforcing harmful stereotypes or offering unsafe advice. Altman has also acknowledged the powerful emotional bonds forming between people and chatbots, describing them as stronger than past attachments to technology. That dependency, he argues, is an ethical challenge that society is only beginning to grapple with.


Time of India
21 minutes ago
- Time of India
'I have never ...', says Elon Musk on resignation of Kairan Quazi, the engineer who joined SpaceX when he was 14
Elon Musk has responded to the resignation of his teen prodigy Kairan Quazi from his space company SpaceX. 'First time I've ever heard of him,' the tech billionaire wrote on X (formerly Twitter). Kairan Quazi joined SpaceX in 2023 when he was 14. A 'rare company,' Quazi said at the time, which didn't use his age as an 'arbitrary and outdated proxy for maturity and ability.' His departure from SpaceX marks a shift from aerospace to the fast-paced world of quantitative finance, where he will join Citadel Securities in New York as a developer. In an interview with Business Insider, Quazi said, 'After two years at SpaceX, I felt ready to take on new challenges and expand my skill set into a different high-performance environment.' "Quant finance offers a pretty rare combination: the complexity and intellectual challenge that AI research also provides, but with a much faster pace," Quazi explained to Business Insider. "At Citadel Securities, I'll be able to see measurable impact in days, not months or years."

Who is Kairan Quazi

Kairan Quazi is a 16-year-old prodigy who became the youngest graduate in the 170-year history of Santa Clara University, completing his degree in computer science and engineering at just 14. Soon after, he joined Elon Musk's SpaceX as a software engineer, working on the Starlink project to improve satellite internet accuracy.

When Kairan Quazi slammed LinkedIn

In 2023, Quazi slammed LinkedIn as 'primitive' for considering him too young for the platform. He shared an Instagram post with a screenshot of a message from LinkedIn informing him his account had been restricted. 'We're excited by your enthusiasm, energy, and focus. We can't wait to see what you do in the world,' the message read, adding, 'Because you currently do not meet the age eligibility criteria to join, we have restricted your account.'
'You are welcome back on the platform once you turn 16 or older,' the message continued. Criticising the professional networking platform, the then-14-year-old argued that 'tests are not used to measure mastery, but the ability to regurgitate,' adding, 'Age, privilege, and unconscious (sometimes even conscious) biases are used to gatekeep opportunities.'