
Latest news with #GoogleLabs

Google Gemini AI: New feature introduced, with image-to-video now available in app

Express Tribune

5 days ago


Google has launched a new feature in its Gemini app that allows users to generate videos from static images, powered by its advanced Veo 3 model. The image-to-video generation tool is available to Google's AI Pro and AI Ultra subscribers, marking the first time the feature has been extended beyond the AI Ultra tier. Previously exclusive to AI Ultra users, the Veo 3 model is now being made more widely accessible to those with a Google AI Pro subscription.

"That dog in your photo? He's got something to say. 🐶 Turn your images into eight-second video clips with sound effects and speech in the @GeminiApp and Flow from @GoogleLabs. This feature uses Veo 3 to generate motion that reflects real world physics and includes a new…" — Google AI (@GoogleAI) July 10, 2025

The tool enables users to create eight-second video clips at 720p resolution, with audio that is synced to the generated video. However, Google has limited the video output to a 16:9 landscape format, meaning it may not be ideal for social media platforms, unlike TikTok's AI Alive feature. To use the feature, Gemini users can select the 'video' option under the 'tools' prompt on the app. The addition of image-to-video generation is also integrated into Google's Flow, its AI filmmaking app, which has expanded to 75 new countries today.

"Since I/O in May, you've created 40M+ videos with Veo 3! Now our new photo to video feature in the @Geminiapp lets you create clips inspired by the world around you. Here's how I imagine our resident dino Stan roams the Google campus when we're not looking:) Ultra/Pro…" — Sundar Pichai (@sundarpichai) July 10, 2025

This feature is rolling out for desktop users first, with mobile access expected by the end of the week. Google's AI Pro subscription costs $20 per month, while the more advanced AI Ultra plan is available for $250 per month.
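The article covers the consumer Gemini app only, but Google also exposes Veo video generation to developers through the Gemini API. A minimal sketch of an image-to-video request using the google-genai Python SDK might look like the following; the model identifier, the polling interval, and the exact response fields are assumptions and may differ by access tier and region.

```python
# Rough sketch: image-to-video with a Veo model via the google-genai SDK.
# Assumptions: "veo-3.0-generate-preview" as the model ID, an API key
# configured in the environment, and the operation/response fields below,
# all of which may differ from the actual service.
import time

from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

with open("dog_photo.png", "rb") as f:
    photo_bytes = f.read()

# Start an asynchronous video-generation job from a still image plus a prompt.
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed Veo 3 model ID
    prompt="The dog turns to the camera and barks a cheerful hello.",
    image=types.Image(image_bytes=photo_bytes, mime_type="image/png"),
)

# Generation is a long-running operation: poll until it completes.
while not operation.done:
    time.sleep(15)
    operation = client.operations.get(operation)

# Download the resulting short clip (the app produces ~8-second, 720p, 16:9 video).
video = operation.response.generated_videos[0].video
client.files.download(file=video)
video.save("dog_clip.mp4")
```

Generation can take a minute or more, so the polling loop is the part most likely to need tuning in practice.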

Google Gemini can now turn your photos into videos with audio: Check how

Business Standard

5 days ago


Google has introduced a new feature for Gemini AI that allows users to animate still photos into eight-second videos with sound, powered by the Veo 3 video generation model. This tool, which adds background noise, ambient audio, or even spoken dialogue, is now rolling out in select regions, including India, for Gemini Advanced Ultra and Pro subscribers. While currently available through the web interface, Google has announced that mobile support will follow later in the week.

Turning stills into video with sound: How it works

With this new tool, users can upload a photo, describe the desired motion, and optionally include prompts for audio effects or narration. Gemini then generates a short 720p video in MP4 format, using a 16:9 landscape layout.

Josh Woodward, Vice President of the Gemini app and Google Labs, recently demonstrated the feature on X (formerly Twitter), sharing how a child's drawing was turned into a short animated clip with synchronised sound. 'Still experimental, but we wanted our Pro and Ultra members to try it first! It's really fun to take kindergarten artwork and make it come to life with sound,' Woodward wrote.

To maintain transparency, all videos include a visible 'Veo' watermark in the bottom-right corner and a hidden SynthID digital watermark created by Google DeepMind. This invisible signature helps verify that the content was generated by AI.

Here are the steps to use Gemini AI's new photo-to-video feature:

  • Click on the 'tools' icon in the prompt bar.
  • Choose the 'video' tool from the list.
  • Upload a still image you want to animate.
  • Enter a description of the desired motion.
  • Add optional audio cues (e.g., sound effects, dialogue, ambient sounds).
  • Gemini will generate a short 720p MP4 video in 16:9 format.
  • Audio will automatically sync with the visuals.

Google Veo 3: What is new?

First unveiled at Google I/O, Veo 3 is Google's most sophisticated video model to date. It can generate realistic visuals and synchronised sound from either text or image-based prompts. A Google blog post explains: 'Veo 3 excels from text and image prompting to real-world physics and accurate lip syncing. It's great at understanding; you can tell a short story in your prompt, and the model gives you back a clip that brings it to life.'

Google AI mode rolled out: Top features for students to learn faster, smarter

Time of India

6 days ago


The way we search for information is changing with developments in Artificial Intelligence (AI). Google's AI Mode has officially launched for all users, bringing a fresh approach to how we find and learn information online. This new feature, built on the Gemini 2.5 system, marks a major shift in search technology. After just weeks of testing in Search Labs, AI Mode is now available to English-language users across India and beyond, and you no longer need to sign up for Google Labs to access it. In the coming days, as it is rolled out to more users, the feature will be available both in Search and in the search bar of the Google app. This change makes the feature accessible to everyone, moving beyond the traditional list of blue links.

According to reports, the success of the initial experimental launch prompted Google to fast-track the broader rollout. Users consistently praised the speed and quality of responses, which led the company to make this powerful feature accessible to everyone without barriers. Traditional search often leaves us jumping between multiple tabs, piecing together information from various sources, and struggling to find complete answers to complex questions. AI Mode addresses these problems by providing clear, helpful responses that understand what you're really looking for. Try these 7 useful features to improve how you learn.

Ask multiple queries simultaneously

Instead of breaking down your query into multiple searches, ask everything at once. For example, rather than searching separately for "indoor activities," "kids age 6-8," and "hot weather," ask: "What are good indoor activities for energetic 6 and 8-year-olds when it's too hot to go outside and we don't have much space or special equipment?" AI Mode processes all these parameters together, saving time and providing more targeted results.

Follow up with your searches

Take advantage of the conversation memory feature. After getting an initial response, ask for:

  • More specific details about certain points
  • Examples or step-by-step instructions
  • Alternative approaches or solutions
  • Clarification on complex concepts

The system remembers your original question, so you don't need to repeat the context.

Use voice commands to go hands-free

Use voice commands when you're cooking, commuting, or doing other activities. This is particularly useful for:

  • Getting recipe instructions while cooking
  • Learning about topics during commutes
  • Asking questions while exercising or walking
  • Accessing information when your hands are busy

Upload images for visual learning

Use Google Lens integration to:

  • Identify plants, animals, or objects you encounter
  • Get information about landmarks or artwork
  • Understand diagrams or charts
  • Translate text in images
  • Learn about historical artefacts or scientific specimens

Customise your learning path

Since AI Mode provides organised, synthesised information, use it to:

  • Start with broad topic overviews
  • Then drill down into specific aspects
  • Ask for real-world applications
  • Request examples that relate to your situation
  • Get step-by-step guidance for practical tasks

Maximise the speed benefits

AI Mode uses "query fan-out" technology to process questions quickly (a rough sketch of this idea follows after these tips). To get the best results:

  • Be specific about what you want to learn
  • Include context about your current knowledge level
  • Ask for information in the format you prefer (lists, explanations, examples)
  • Specify if you want beginner, intermediate, or advanced information

Make the most of source integration

AI Mode pulls information from multiple sources and presents it in a unified response. Use this feature by:

  • Checking the provided source links for deeper information
  • Asking for additional sources if you need more references
  • Requesting different perspectives on controversial topics
  • Asking for the most recent information on rapidly changing subjects
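The article names "query fan-out" without explaining it; the general idea behind the term is to break a broad question into focused sub-queries, run those searches in parallel, and then synthesize a single answer from the partial results. Below is a purely conceptual Python sketch of that pattern, not Google's implementation; search_web() and synthesize() are hypothetical placeholders.

```python
# Conceptual sketch of a "query fan-out" pattern: split a broad question into
# focused sub-queries, run them concurrently, then merge the results.
# Not Google's implementation; search_web() and synthesize() are placeholders.
import asyncio

async def search_web(sub_query: str) -> str:
    # Placeholder: a real system would call a search backend here.
    await asyncio.sleep(0.1)  # simulate network latency
    return f"results for: {sub_query}"

def synthesize(question: str, snippets: list[str]) -> str:
    # Placeholder: a real system would have a language model merge the snippets.
    return f"Answer to '{question}' built from {len(snippets)} sources."

async def answer(question: str) -> str:
    # Fan out: break the compound question into focused sub-queries.
    sub_queries = [
        "indoor activities for kids aged 6-8",
        "low-equipment games for small spaces",
        "keeping energetic kids busy in hot weather",
    ]
    # Run every sub-query concurrently instead of one search at a time.
    snippets = await asyncio.gather(*(search_web(q) for q in sub_queries))
    # Fan in: merge the partial results into a single response.
    return synthesize(question, snippets)

if __name__ == "__main__":
    print(asyncio.run(answer(
        "What are good indoor activities for energetic 6 and 8-year-olds "
        "when it's too hot to go outside?"
    )))
```

The concurrency is the point of the pattern: because the sub-queries run at the same time, the slowest one, rather than the sum of all of them, sets the response time.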

Walk through a trippy mirrored maze in Rockefeller Center this month

Time Out

07-07-2025

  • Entertainment

Walking through Midtown can feel like a maze, but at this trippy mirrored art installation, that's exactly the point. You'll lose your bearings and find yourself again inside this immersive work, where space feels endless and ever-changing. Called Reflection Point, the piece by Brooklyn-based artist duo Wade and Leta (Wade Jeffree and Leta Sobierajski) is on view for free at Rockefeller Center until July 20. Take a moment to stroll through its shifting pathways and definitely snap a few photos while you're there.

As you walk through the colorful maze, you'll spot bold, graphic shapes that function as doors, welcoming visitors to push through and uncover new routes for some playful exploration. Color guides the eye through certain passages, while reflection and refraction conceal others, inviting constant reevaluation of direction and experience.

"The piece is an immersive, kinetic environment of color and mirrored surfaces, inviting viewers to move, reflect, and participate in the iconic location," Wade and Leta said in a statement. "It's a work about perception, process, and the shifting relationship between technology and art."

To create the larger-than-life installation, the artist duo used Whisk, a Google Labs AI experiment that enables fast, visual ideation and brainstorming. Then, they combined mirrored aluminum composite panel, plywood, stainless steel, vinyl, and rubber to take the ideas off the screen and into reality. Though the artists have created participatory artwork in places like London, Tokyo and Beijing, this is their first large-scale outdoor artwork in NYC.

I tested Google's new AI dressing room — here's my verdict

New York Post

05-07-2025

  • Entertainment

I wasn't planning to try on Kate Hudson's yellow 'How to Lose a Guy in 10 Days' dress from my office desk this week. But that's what happened when I downloaded Doppl, Google's new AI fashion experiment that lets users virtually try on any outfit. Think Alicia Silverstone's digital closet in 'Clueless' — but AI, and on your phone. All you have to do is snap a full-body photo of yourself, upload the outfit you want to try and, within 30 to 60 seconds, your digital twin shows up wearing it.

It's meant to replace your dressing room. So naturally, I gave it a shot. My 'Doppl' — unsettlingly similar to me, but with slightly-off proportions and longer hair — stood in the iconic yellow gown I've been obsessed with since middle school. Then it waved. Each animation is different. The app can create short videos of your AI clone moving in the outfit, usually with a slow turn or stiff pose. In this case, mine lifted an arm and posed like she was headed to the Oscars.

It was jarring. But I couldn't stop watching. The fit wasn't exact, but it was more accurate than I expected and enough to make me genuinely want the dress. Maybe need it.

Doppl, launched last week through Google Labs, is part try-on tool, part tech experiment. Users can upload photos of outfits — whether it's a Pinterest fit, something from your favorite store's website or a sweater you spotted at a thrift shop — and the app creates a virtual version of you in the outfit. You can also skip using your own photo and choose from 20 preset AI models of different ages, races and body types.

For now, Google says Doppl 'might not always get things right.' The app only supports tops, bottoms and dresses — no shoes, bags or accessories — and doesn't offer sizing advice or help with fit. Still, I wanted to see what it could do.

One outfit I tested came from my Pinterest board — titled 'The Life of a Shopping Addict' — basically a running digital wish list of clothes I wish I owned. I picked a Saturday-night look: a black tank top and long, flowy skirt. Doppl gave me a short black mini dress and black boots that looked nothing like it. In some photos, it even added a few inches to my hair.

Other outfits fared better. I uploaded a pair of jeans from Zara that had been sitting in my cart, and Doppl surprised me by generating an image that included the belt from the product photo, even though Google said accessories aren't yet supported. The rendering wasn't perfect, but as someone who's 5'10" and struggles to find jeans that are long enough online, it looked good enough. I bought them.

From what I've seen, simpler outfits work best. The AI struggles with complex silhouettes — layered looks, blurry images, tricky fabrics — and occasionally invents new clothes from scratch if it can't figure things out. When it works, it's persuasive. When it doesn't, you're watching a glitchy clone wear something you didn't ask for.

'This is generative AI in an augmented reality format,' said Sucharita Kodali, a retail analyst at Forrester. 'I can't imagine that it wouldn't be useful. Is it going to be transformational and double anybody's business? No. But it'll be useful.'

The app isn't perfect. Doppl skips over personalized questions like your height or measurements, details that could make try-ons more accurate. You also have to be over 18, live in the U.S. and be logged into your Google account to use it. While it may not be replacing store dressing rooms anytime soon, for a free app on your phone, it gets surprisingly close. And it might just talk you into buying something you already wanted anyway.
