Latest news with #GoogleVeo


CNA
18-07-2025
- CNA
Honor 400 Pro review: Is this near-flagship phone worth buying?
The Honor 400 Pro is part of a growing group of near-flagship devices: phones that offer higher specs than mid-range models but are priced below flagships. The Honor 400 Pro offers plenty for its S$899 retail price. Highlights include a capable 200MP main camera, support for 100W fast charging (charger and cable included) and a 6,000mAh battery that easily offers one full day of battery life. Honor differentiates the 400 Pro from the pack by going big on AI imaging features. One highlight is an image-to-video feature that converts photos into five-second video clips. Processing is handled by a previous-generation Snapdragon 8 Gen 3, the chip that powered last year's Android flagships like the Samsung Galaxy S24 Ultra and Xiaomi 14. We'll explore its AI and camera capabilities, performance and battery life. Whether you're shopping for the best near-flagship phone in Singapore or just curious about the Honor 400 Pro's capabilities, this review has you covered.

AI FEATURES

The Honor 400 Pro goes big on AI, integrating features like AI Eraser, Face Tune, low-resolution image upscaling and AI cutouts into its Gallery app. If you enjoy editing your photos, it's very useful to have all these functions built into the Gallery app instead of relying on third-party apps like Snapseed. Paired with a camera array that includes a capable 200MP main lens, it also offers AI-enabled 50x zoom.

Image to video is arguably the phone's party piece. It uses Google's Veo 2 AI video engine to automatically generate five-second animated clips from photos, taking one to two minutes per clip. Unlike the full-fat Veo 2 and Veo 3 engines, this version doesn't accept prompts, so if you're dissatisfied with your results, you'll need to re-submit the photo. Your results will vary tremendously. It produced a convincing clip of my dog "walking". Other attempts produced mixed results that ranged from curious to creepy. 
One animation of a family picture in a restaurant added random people walking behind us, even though we were in an enclosed booth. Our expressions and movements also appeared unnatural. Pro tip: don't try to create an animation from a passport photo, unless you enjoy scaring yourself silly.

CAMERAS

The Honor 400 Pro's rear camera array includes three lenses:
- 200MP Ultra-clear AI Main Camera (f/1.9, OIS)
- 50MP Telephoto Camera (f/2.4, OIS)
- 12MP Ultra Wide Camera (f/2.2)

The 200MP main camera is quite impressive, capturing detailed photos even in low light. The telephoto lens is decent, though image quality suffers at full 50x zoom. While the ultra-wide lens is useful for 0.5x selfies, images can be soft in low light. It produces decent results in brightly lit environments, although it lacks the detail and contrast of the main lens.

PERFORMANCE

The Honor 400 Pro may use a year-old processor, but it doesn't feel compromised in use. The Snapdragon 8 Gen 3 chipset still posts competitive benchmark scores and is paired with 12GB of RAM plus 12GB of virtual RAM. In daily use, it feels fast and responsive, handling everyday tasks smoothly. Scrolling social feeds and videos, editing images, chatting and browsing are all fluid, even with multiple apps open. Gaming performance is good: demanding titles like PUBG Mobile and Call of Duty run smoothly, even at high graphics settings. Honor's Magic OS 9.0 is generally fast and smooth in daily use. However, it is prone to bloatware. For instance, some pre-installed apps and services like Honor Docs duplicate more popular apps, though these can be uninstalled or disabled.

DISPLAY

The 6.7-inch AMOLED display has a 2,800 x 1,280 resolution and up to a 120Hz refresh rate. It offers deep blacks and rich, vibrant colours and supports HDR10+. This makes it outstanding for viewing video content on supported apps like Prime Video and YouTube. 
BATTERY LIFE

The Honor 400 Pro's 6,000mAh silicon-carbon battery easily offers enough power for an entire day of use. After about one and a half days of use, I had about 30 per cent of battery life remaining. This included four to five hours of surfing, listening to music and watching videos.

HOW IT STACKS UP

The Honor 400 Pro (S$893, usual price S$899) delivers a compelling blend of performance and price. From its 200MP main camera and Snapdragon 8 Gen 3 chipset to its 100W fast charging, Honor has packed serious value into this device. The Honor 400 Pro stands out for users who prioritise photography, battery life and AI-powered features. If you're shopping for a powerful phone under S$1,000 in Singapore, this is worth considering.

Pros: Great display with HDR support, good main camera, AI-integrated imaging, automatic photo-to-video generation, good battery life
Cons: Average ultra-wide camera, photo-to-video feature produces mixed results


The Verge
17-07-2025
- Entertainment
- The Verge
Adobe's new AI tool turns silly noises into realistic audio effects
Adobe is launching new generative AI filmmaking tools that provide fun ways to create sound effects and control generated video outputs. Alongside the familiar text prompts that typically allow you to describe what Adobe's Firefly AI models should make or edit, users can now use onomatopoeia-like voice recordings to generate custom sounds, and use reference footage to guide the movements in Firefly-generated videos.

The Generate Sound Effects tool, launching in beta on the Firefly app, can be used with recorded and generated footage, and provides greater control over audio generation than Google's Veo 3 video tool. The interface resembles a video editing timeline and allows users to match the effects they create in time with uploaded footage. For example, users can play a video of a horse walking along a road and simultaneously record 'clip clop' noises in time with its hoof steps, alongside a text description that says 'hooves on concrete.' The tool will then generate four sound effect options to choose from.

This builds on the Project Super Sonic experiment that Adobe showed off at its Max event in October. It doesn't work for speech, but it does support the creation of impact sounds like twigs snapping, footsteps, zipper effects, and more, as well as atmospheric noises like nature sounds and city ambience.

New advanced controls are also coming to the Firefly Text-to-Video generator. Composition Reference allows users to upload a video alongside their text prompt to mirror the composition of that footage in the generated video, which should make it easier to achieve specific results compared with repeatedly inputting text descriptions alone. Keyframe cropping will let users crop and upload images of the first and last frames that Firefly can use to generate video between, and new style presets provide a selection of visual styles that users can quickly select, including anime, vector art, claymation, and more. 
These style presets are only available to use with Adobe's own Firefly video AI model. The results leave something to be desired, if the live demo I saw was any indication: the 'claymation' option just looked like early 2000s 3D animation. But Adobe is continuing to add support for rival AI models within its own tools, and Adobe's Generative AI lead Alexandru Costin told The Verge that similar controls and presets may be available to use with third-party AI models in the future. That suggests Adobe is vying to keep its place at the top of the creative software food chain as AI tools grow in popularity, even if it lags behind the likes of OpenAI and Google in the generative models themselves.


Phone Arena
15-07-2025
- Entertainment
- Phone Arena
AI travel videos are getting so real, people are falling for fake attractions
A Malaysian couple recently found themselves at the center of an AI hoax that turned a simple weekend trip into a costly and frustrating experience. After watching what appeared to be a professionally produced travel video, the elderly couple drove over 230 miles from Kuala Lumpur to a small town in Perak, only to discover that the entire attraction was fabricated by artificial intelligence.

The video that fooled them featured a realistic news segment from a fictional broadcaster called "TV Rakyat." In the clip, a lifelike AI-generated reporter showcased the "Kuak Skyride," a scenic cable car ride said to exist in the town of Kuak Hulu. The footage showed lush mountain views, interviews with so-called tourists, and even a luxurious dining experience overlooking the landscape. The segment ended with a visit to a deer petting zoo. The entire video appeared authentic, complete with voiceovers and convincing visuals likely created using Google's Veo 3 model.

According to local media including the Metro and the South China Morning Post, the couple checked into a hotel in Perak's Pengkalan Hulu area on June 30 and asked about the cable car ride. A hotel employee, posting on Threads as @dyaaaaaaa._, recounted the moment she realized the attraction didn't exist.

The woman was reportedly upset and said she planned to sue the journalist featured in the video. But the hotel employee had to break the news: the reporter was also AI-generated. 'Why would anyone want to lie?' the woman replied. 'There was even a reporter (in the video).'

This wasn't an isolated incident. Another social media user reported their parents spent RM 9,000 (around US$2,120) to rent a van for the same trip, believing the video to be real. Reports suggest the video went viral across Malaysian social platforms before eventually being taken down due to public backlash. 
The situation raises important questions about the growing realism of AI-generated video content. If ordinary travelers can be misled by videos that seem indistinguishable from real-life footage, what does that mean for digital media going forward? Cases like this show that while generative video tools can be powerful for creativity, they also introduce risks around misinformation, especially when viewers are unaware of how convincing synthetic content can be. Personally, I think that as these tools become more accessible, we may need better labeling, regulations, or education to help viewers distinguish real from fake.


South China Morning Post
10-07-2025
- Entertainment
- South China Morning Post
How new video of Will Smith eating spaghetti shows incredible progress in AI video
Gone are the days of six-fingered hands or distorted faces – AI-generated video is becoming increasingly convincing, attracting Hollywood, artists and advertisers while shaking the foundations of the creative industry.

To measure the progress of AI video, you need only look at Will Smith eating spaghetti. Since 2023, this unlikely sequence – entirely fabricated – has become a technological benchmark for the industry. Two years ago, the actor appeared blurry, his eyes too far apart, his forehead exaggeratedly protruding, his movements jerky, and the spaghetti did not even reach his mouth. A version published a few weeks ago by a user of Google's Veo 3 platform, however, showed no apparent flaws whatsoever.

'Every week, sometimes every day, a different [video] comes out that's even more stunning than the next,' said Elizabeth Strickler, director of media innovation and entrepreneurship programmes at Georgia State University in the US.


Hindustan Times
01-07-2025
- Entertainment
- Hindustan Times
Want cinematic-looking AI videos? Try these 5 prompt techniques
As AI-generated videos gain popularity across content creation platforms, creators are quickly learning that the secret to realistic and professional-looking visuals lies in one key factor: better prompts. According to video experts and early users of tools like Google Veo, Midjourney, and Sora, the quality and structure of prompts significantly affect how cinematic or coherent the final video appears. These tools, while powerful, rely heavily on the way users describe the scene, essentially turning the prompt into a virtual director's script.

Technique 1: Set a structure first, not style

One of the most common mistakes users make is starting their prompts with vague adjectives such as 'a beautiful sunset' or 'a stunning cityscape.' Experts recommend leading with structure instead. For instance, a prompt like 'overhead drone shot of a bustling city skyline at night, cars moving below, buildings glowing with neon lights' yields much better results than a generic description like 'a beautiful cinematic video of a city at night'. The idea is to think like a movie director, setting the scene visually and letting the AI fill in the details.

Technique 2: Use cinematic language like a camera operator does

Cinematic results demand cinematic cues. Prompts that include camera angles and movements, like 'low-angle tracking shot,' 'overhead drone view,' or 'static close-up', help AI generators interpret the visual composition more accurately. These terms are rooted in traditional filmmaking and signal how the virtual camera should behave, adding depth and dynamic quality to the scene. Here are some cinematic-style prompt examples:

- 'Over-the-shoulder shot of a woman typing on a laptop in a dimly lit café, warm lighting, rain tapping the window.'
- 'Crane shot rising above a wedding ceremony in an open field at sunset, guests applauding, petals falling in slow motion.'
- 'Tracking shot of a boy running through a cornfield, sunlight flickering through the leaves, handheld camera effect.'
- 'Static close-up of hands lighting a candle in a dark room, soft shadows, flickering flame reflecting in the eyes.'
- 'POV shot of a motorcyclist weaving through a forest trail, dirt flying, camera slightly shaky for realism.'

Using prompts like these doesn't just describe what's in the scene; it tells the AI how to frame, light, and move through it, just like a real camera crew would.

Technique 3: Break the scene into beats

Instead of trying to cram an entire story into one sentence, experts suggest breaking the prompt into visual segments, often referred to as 'beats.' This technique gives AI models a clearer sense of progression and pacing. Here are a few examples for different scenes:

- Beat 1: Wide aerial shot of a mist-covered forest at dawn, sunlight breaking through the trees
- Beat 2: Close-up of dew dripping from a leaf, soft lighting, quiet atmosphere
- Beat 3: Slow pan across a narrow trail as a hiker emerges from the fog, lens flare glinting off their backpack

Another example:

- Beat 1: Static shot of a crowded metro platform, commuters standing still, announcements echoing
- Beat 2: Over-the-shoulder shot of a young woman stepping onto the train, her reflection visible in the window
- Beat 3: Tracking shot from inside the train as it moves through a tunnel, lights flickering past

This method helps guide AI models to create more intentional, narrative-driven visuals, even if full scene transitions aren't yet supported.

Technique 4: Add motion, mood and details for realism

Adding movement cues like 'camera pans upward' or 'zoom pulls back' can enhance realism. Details such as 'fog drifting,' 'rain on glass,' or 'leaves swirling in wind' create a lifelike feel. Mood-setting phrases like 'golden hour light' or 'cold overcast sky' further improve cinematic quality. 
Since AI outputs vary, testing and tweaking prompts is key. Tools like Google Veo respond well to detailed inputs, often delivering professional-looking results.

Technique 5: Testing and iterating makes it better

Since AI video tools are still evolving, experts emphasise the need to test and iterate. Running the same prompt multiple times and tweaking specific words often leads to better outcomes. Google's Veo, in particular, has shown more consistent results than many other generators, especially when using detailed and structured prompts.
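To make the advice above concrete, here is a minimal sketch of how the techniques could be captured in code: a small helper that assembles structure, camera language, detail and mood into one prompt string, and another that numbers beats so the model sees an explicit progression. The function names and example wording are purely illustrative; this does not call any real Veo, Sora or Midjourney API.

```python
def build_prompt(shot: str, subject: str, details: list[str], mood: str) -> str:
    """Assemble a structure-first prompt: camera shot, subject, detail cues, mood."""
    parts = [f"{shot} of {subject}"] + details + [mood]
    return ", ".join(parts)


def build_beats(beats: list[str]) -> str:
    """Number each beat so the prompt conveys progression and pacing."""
    return "\n".join(f"Beat {i}: {beat}" for i, beat in enumerate(beats, start=1))


# Technique 1 + 2 + 4: structure first, cinematic shot language, motion/mood details
prompt = build_prompt(
    shot="low-angle tracking shot",
    subject="a cyclist crossing a rain-soaked bridge",
    details=["fog drifting", "reflections on wet asphalt"],
    mood="cold overcast sky",
)
print(prompt)
# low-angle tracking shot of a cyclist crossing a rain-soaked bridge, fog drifting, reflections on wet asphalt, cold overcast sky

# Technique 3: break the scene into beats
print(build_beats([
    "Wide aerial shot of a mist-covered forest at dawn",
    "Close-up of dew dripping from a leaf, soft lighting",
    "Slow pan across a narrow trail as a hiker emerges from the fog",
]))
```

For Technique 5, the natural extension is to generate several small variations of the same prompt (swapping one detail or mood phrase at a time) and compare the resulting clips, rather than rewriting the whole prompt from scratch on each attempt.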