I used Google's Veo 3 to create AI ASMR food videos
Tom's Guide | 20-07-2025
Google's Veo 3 AI video model is a league above any of its competitors for one key reason — sound. You can prompt not just what you see on screen, but also what you hear.
Built by Google's DeepMind lab, the first Veo model debuted in May 2024, and each new generation has added more functionality. It has always excelled in motion accuracy and physics understanding compared to competitors, but the addition of sound was a game-changer.
You can use it to prompt a short commercial, a scene from a movie you're writing, or even a music video. But there's one use I've seen more than any other — ASMR (autonomous sensory meridian response): those gentle tapping, whispering, and ambient sounds that trigger a tingling sensation for some people.
To see just how far this could go, I created a series of ASMR food prompts — each designed to generate a matching video and sound around something culinary.
Veo 3 is now available in the Gemini app. Just select the Video option when starting a new prompt, type what you want, and an 8-second clip is generated.
While Gemini isn't necessarily the best way to access Veo 3 — I'd recommend Freepik, Fal, Higgsfield, or Google Flow — it's easy to use and gets the job done.
A key advantage of using Gemini directly is that it automatically interprets and enhances your prompts. So if you ask for 'a cool ASMR video featuring lasagna,' that's what you'll get.
You can also be more specific using something called structured prompting — labeling each moment with timestamps and scene descriptions. But unless you need precise control, a simple paragraph (aka narrative prompting) is usually more effective.
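To make the difference concrete, here is a small sketch of the two prompting styles. The helper function and the prompt wording are illustrative assumptions of mine, not an official Veo 3 or Gemini API — the point is only the shape of the text you would paste into the prompt box.

```python
# Sketch: the same fizzy-drink request phrased two ways.
# The helper and all prompt text are illustrative, not an official API.

def structured_prompt(scenes):
    """Structured prompting: label each moment with a timestamp and description."""
    return "\n".join(f"[{start}-{end}s] {desc}" for start, end, desc in scenes)

scenes = [
    (0, 3, "Close-up: ice cubes drop into an empty glass, sharp clinks"),
    (3, 6, "Fizzy soda pours over the ice, crackling carbonation"),
    (6, 8, "Slow zoom on rising bubbles, a soft hiss fades out"),
]
structured = structured_prompt(scenes)
print(structured)

# Narrative prompting: the same request as one plain paragraph.
narrative = (
    "ASMR video of a fizzy drink poured over ice in a tall glass. "
    "Emphasize the clink of the cubes, the crackle of carbonation, "
    "and a soft hiss as the bubbles rise. No music."
)
```

For most requests the narrative paragraph is enough; the timestamped version earns its extra effort only when you need each beat of the 8-second clip to land at a specific moment.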
The first task in any AI project is thinking about your prompt. Models are getting better at interpreting intent, but it's still better to be specific if you know what you want.
I knew I wanted ASMR food videos, so I started with a test: 'ASMR food video with sound.'
The result? Decent. It essentially gave me the lasagna I had in mind. Then I refined it — outlining specific food types, adding sound descriptions, and even trying a structured prompt for a fizzy drink with ice.
Most of the time, narrative prompts work best. Just describe what you want to see, the flow of the video, and how sound should come through.
The first prompt, 'ASMR food video with sound,' produced a stunning clip of someone sliding a fork into a slice of lasagna. You hear the squish as the fork enters, then the clunk as it hits the plate. This is one case where I wish Veo 3 had an 'extend clip' button.
There was no other prompting involved, so I had no way of knowing what the food would be, how the sound would come out, or whether the sound would work at all. This is why it's important to be specific when prompting AI models, even ones inside chatbots like Gemini.
Next, I went more specific — a longer, narrative-style prompt asking Veo 3 to generate a close-up of a chef preparing and eating satisfying food in a well-lit kitchen.
I asked for slow-motion visuals of ingredients being chopped, the sizzling sound of butter melting in a pan, and a crunch as the chef takes a bite.
I also added this line: 'Emphasize audio quality: clean, layered ASMR soundscape without music' — directing not just the sound itself but the style of sound, and ruling out what I didn't want to hear.
For the final prompt I started with an image. I used Midjourney v7 to create a picture of a woman looking at rainbow popcorn, then added the prompt 'ASMR food' in Gemini.
Visually, the result was stunning — but for some reason, the woman says in a voiceover, 'This is delicious, this rainbow popcorn.' That's on me — I didn't specify whether she should speak, or what she should say.
A simple fix: put any speech you want in quotes. For example, I could have prompted her to say 'I love to watch popcorn pop,' and emphasized the word pop. I also could've specified that she was speaking on camera — and Veo 3 would have synced the lip movement to match.
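That fix can be sketched as a small prompt-assembly helper. The function and the exact phrasing are my own assumptions about what a corrected prompt might look like, not a documented Veo 3 feature set — the technique is simply quoting the dialogue and stating that the speaker is on camera.

```python
# Sketch: embedding exact dialogue in a prompt so the model uses your
# words instead of inventing a voiceover. Illustrative phrasing only.

def prompt_with_speech(scene, line):
    # Quoting the dialogue pins the exact words; saying the speaker is
    # on camera asks the model to sync lip movement to them.
    return f'{scene} She speaks on camera and says: "{line}"'

p = prompt_with_speech(
    "ASMR video: a woman watches rainbow popcorn popping in a glass bowl.",
    "I love to watch popcorn pop",
)
print(p)
```

Leaving the quotes out is what produced the surprise voiceover in my popcorn clip: the model filled the silence with dialogue of its own choosing.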
Overall, Veo 3 delivers impressive results, especially when it comes to generating high-quality sound that accurately reflects the visuals. While there are a few quirks to navigate, such as unintended voiceovers or slightly underbaked-looking lasagna, these are easily addressed with more specific prompting.
