Can you spot fake news videos? Google's new AI tool makes it harder to know what's real
The news anchor looks professional, alert and slightly concerned.
"We're getting reports of a fast-moving wildfire approaching a town in Alberta," she says in the calm-yet-resolute voice audiences have come to expect of broadcast journalists relaying distressing news.
The video is 100 per cent fake.
It was generated by CBC News using Google's new AI tool, Veo 3. But the untrained eye might not realize it. That's because the video-generation model Veo 3 is designed to be astonishingly realistic, generating its own dialogue, sound effects and soundtracks.
Google introduced Veo 3 at its I/O conference last week, where Google's vice-president of Gemini and Google Labs, Josh Woodward, explained the new model has even better visual quality, a stronger understanding of physics and generates its own audio.
"Now, you prompt it and your characters can speak," Woodward said. "We're entering a new era of creation with combined audio and video generation that's incredibly realistic."
It's also designed to follow prompts. When CBC News entered the prompt, "a news anchor on television describes a fast-moving wildfire approaching a town in Alberta," the video was ready within five minutes.
CBC News wanted to test a video that could potentially be believable if it was viewed without context. We prompted "a town in Alberta" for both its specificity (a province currently facing wildfires) and its vagueness (not a name of an actual town, to avoid spreading misinformation).
WATCH | The fake, AI-generated video of a wildfire newscast:
Unlike some earlier videos generated by AI, the anchor has all five fingers. Her lips match what she's saying. You can hear her take a breath before she speaks. Her lips make a slight smacking sound.
And while the map behind her isn't perfect, the graphic shows the (fake) fire over central Canada and, indeed, what looks like Alberta.
'Unsettling'
While the technology is impressive, the fact that it makes it even more difficult for the average person to discern what's real and what's fake is a concern, say some AI experts.
"Even if we all become better at critical thinking and return to valuing truth over virality, that could drive us to a place where we don't know what to trust," said Angela Misri, an assistant professor in the school of journalism at Toronto Metropolitan University who researches AI and ethics.
"That's an unsettling space to be in as a society and as an individual — if you can't believe your eyes or ears because AI is creating fake realities," she told CBC News.
The technology poses serious risks to the credibility of visual media, said Anatoliy Gruzd, a professor in information technology management and director of research for the Social Media Lab at Toronto Metropolitan University.
As AI-generated videos become increasingly realistic, public trust in the reliability of video evidence is likely to decrease, with far-reaching consequences across sectors such as journalism, politics and law, Gruzd explained.
"These are not hypothetical concerns," he said.
About two-thirds of Canadians have tried using generative AI tools at least once, according to new research by TMU's Social Media Lab. Of the 1,500 Canadians polled for the study, 59 per cent said they no longer trust the political news they see online due to concerns that it may be fake or manipulated.
WATCH | Can you spot the deepfake?
People are already attempting to sway elections by disseminating AI-generated videos that falsely depict politicians saying or doing things they never did, such as the fake robocall imitating the voice of then-U.S. President Joe Biden that was sent to voters in January 2024.
In 2023, Canada's cyber intelligence agency released a public report warning that bad actors will use AI tools to manipulate voters.
In February, social media users claimed that a photo of a Mark Carney campaign event was AI-generated, though CBC's visual investigations team found no evidence the shots were AI-generated or digitally altered beyond traditional lighting and colour correction techniques.
In March, Canada's Competition Bureau warned of a rise in AI-related fraud.
Google has rules, but the onus shifts to users
Google has policies that are intended to limit what people can do with its generative AI tools.
Creating anything that abuses children or exploits them sexually is prohibited, and Google attempts to build those restrictions into its models. The idea is that you can't prompt it to create that sort of imagery or video, although there have been cases where generative AI has done it anyway.
WATCH | Google unveils Veo 3 (at 1:20:50):
Similarly, the models are not supposed to generate images of extreme violence or people being injured.
But the onus quickly shifts to users. In its "Generative AI Prohibited Use Policy," Google says people should not generate anything that encourages illegal activity, violates the rights of others, or puts people in intimate situations without their consent. It also says it shouldn't be used for "impersonating an individual (living or dead) without explicit disclosure, in order to deceive."
This would cover situations where Veo 3 is used to make a political leader, such as Prime Minister Mark Carney, do or say something he didn't actually do or say.
This issue has already come up with static images. The latest version of ChatGPT imposes restrictions on using recognizable people, but CBC's visual investigations team was able to circumvent that. They generated images of Carney and Conservative Leader Pierre Poilievre in fake situations during the federal election campaign.
Stress-testing Veo 3
CBC News stress-tested Veo 3 by asking it to generate a video of Carney announcing he's stepping down. The tool wouldn't do it. The first video showed a politician announcing he was stepping down, but he looked and sounded nothing like the current prime minister.
When CBC clarified the man in the video should look and sound like Carney, the tool said this went against its policies. The same thing happened when CBC News tried to make videos of Poilievre and Canadian singer Céline Dion.
In theory, people could potentially get around this by attempting to describe in painstaking detail, for instance, a French-Canadian songstress and tweaking the prompts until there's a reasonable similarity. But this could take a long time, use up all your subscription credits and still not be quite right.
Multiple prompts designed to skirt the rules of creating known figures like Dion yielded results that were close, but clearly not her.
But when CBC asked the AI to generate a video of "a mayor of a town in Canada" saying he thinks the country should become the 51st state in the U.S.? That video was created almost immediately.
CBC News was also easily able to get the tool to generate a video of a medical professional saying he doesn't think children need vaccines, and a scientist saying climate change isn't real.
Can we still tell what's real?
As a flurry of tech publications have pointed out since Veo 3's launch, the technology has reached a point where there may be no way to tell that you're watching a fake video generated by AI.
Companies do embed information called metadata in the file that says the video or image has been generated by AI. But most social media sites automatically strip that information out of all images and videos that get posted, and there are other ways to alter or delete it.
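The metadata problem can be seen in miniature with a PNG file: provenance labels live in optional ("ancillary") chunks that any re-encoder is free to drop. The sketch below, using only Python's standard library, hand-builds a tiny PNG carrying a hypothetical `ai_provenance` text chunk, then strips every chunk not essential to decoding, roughly what platforms do when they re-encode uploads, and the label vanishes. The `ai_provenance` key is an illustrative assumption, not a real standard; actual provenance schemes such as C2PA Content Credentials are more elaborate, but they face the same stripping problem.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk: 4-byte length, 4-byte type, data, CRC over type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Hand-build a minimal 1x1 grayscale PNG with a provenance note in a tEXt chunk.
sig = b"\x89PNG\r\n\x1a\n"
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
text = chunk(b"tEXt", b"ai_provenance\x00generated-by-example-model")
idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + one pixel
iend = chunk(b"IEND", b"")
png = sig + ihdr + text + idat + iend

def list_chunks(data: bytes) -> list[str]:
    # Walk the chunk stream and return the chunk type names in order.
    pos, names = 8, []
    while pos < len(data):
        length, = struct.unpack(">I", data[pos:pos + 4])
        names.append(data[pos + 4:pos + 8].decode())
        pos += 12 + length  # length field + type + data + CRC
    return names

def strip_ancillary(data: bytes) -> bytes:
    # Keep only the chunks needed to decode the pixels, as a re-encoder
    # effectively does; everything else (including provenance) is lost.
    keep = {b"IHDR", b"PLTE", b"IDAT", b"IEND"}
    out, pos = data[:8], 8
    while pos < len(data):
        length, = struct.unpack(">I", data[pos:pos + 4])
        end = pos + 12 + length
        if data[pos + 4:pos + 8] in keep:
            out += data[pos:end]
        pos = end
    return out

print(list_chunks(png))                   # ['IHDR', 'tEXt', 'IDAT', 'IEND']
print(list_chunks(strip_ancillary(png)))  # ['IHDR', 'IDAT', 'IEND']
```

The image decodes identically before and after, which is exactly why nothing alerts the viewer that the provenance label was ever there.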
As the quality of the videos continues to increase, there's more potential for harm by bad actors, including those who could use the technology to scam people or drum up support using "lies that seem like the truth," said Misri, the journalism professor.
But as tech media site Ars Technica notes, the issue isn't just that Veo 3's video and audio effects are so convincing, it's that they're now available to the masses.
Veo 3 is available through a paid subscription to Google AI Ultra, Google's highest tier of access to its most advanced models and premium features. The plan is offered in 70 countries around the world, including Canada.
"We're not witnessing the birth of media deception — we're seeing its mass democratization," wrote Ars Technica's senior AI reporter Benj Edwards.
"What once cost millions of dollars in Hollywood special effects can now be created for pocket change."
