
Latest news with #JoshWoodward

Can you spot fake news videos? Google's new AI tool makes it harder to know what's real

Yahoo

5 days ago

  • General
  • Yahoo


The news anchor looks professional, alert and slightly concerned. "We're getting reports of a fast-moving wildfire approaching a town in Alberta," she says in the calm-yet-resolute voice audiences have come to expect of broadcast journalists relaying distressing news.

The video is 100 per cent fake. It was generated by CBC News using Google's new AI tool, Veo 3. But the unknowing eye might not realize it. That's because the video-generation model Veo 3 is designed to be astonishingly realistic, generating its own dialogue, sound effects and soundtracks.

Google introduced Veo 3 at its I/O conference last week, where Google's vice-president of Gemini and Google Labs, Josh Woodward, explained that the new model has even better visual quality, a stronger understanding of physics and generates its own audio. "Now, you prompt it and your characters can speak," Woodward said. "We're entering a new era of creation with combined audio and video generation that's incredibly realistic."

It's also designed to follow prompts. When CBC News entered the prompt, "a news anchor on television describes a fast-moving wildfire approaching a town in Alberta," the video was ready within five minutes. CBC News wanted to test a video that could potentially be believable if it was viewed without context. We prompted "a town in Alberta" for both its specificity (a province currently facing wildfires) and its vagueness (not the name of an actual town, to avoid spreading misinformation).

Unlike some earlier videos generated by AI, the anchor has all five fingers. Her lips match what she's saying. You can hear her take a breath before she speaks. Her lips make a slight smacking sound. And while the map behind her isn't perfect, the graphic shows the (fake) fire over central Canada and indeed, what looks like Alberta.
'Unsettling'

While the technology is impressive, the fact that it makes it even more difficult for the average person to discern what's real and what's fake is a concern, say some AI experts.

"Even if we all become better at critical thinking and return to valuing truth over virality, that could drive us to a place where we don't know what to trust," said Angela Misri, an assistant professor in the school of journalism at Toronto Metropolitan University who researches AI and ethics. "That's an unsettling space to be in as a society and as an individual — if you can't believe your eyes or ears because AI is creating fake realities," she told CBC News.

The technology poses serious risks to the credibility of visual media, said Anatoliy Gruzd, a professor in information technology management and director of research for the Social Media Lab at Toronto Metropolitan University. As AI-generated videos become increasingly realistic, public trust in the reliability of video evidence is likely to decrease, with far-reaching consequences across sectors such as journalism, politics and law, Gruzd explained. "These are not hypothetical concerns," he said.

About two-thirds of Canadians have tried using generative AI tools at least once, according to new research by TMU's Social Media Lab. Of the 1,500 Canadians polled for the study, 59 per cent said they no longer trust the political news they see online due to concerns that it may be fake or manipulated.

People are already attempting to sway elections by disseminating AI-generated videos that falsely depict politicians saying or doing things they never did, such as the fake voice of then-U.S. President Joe Biden that was sent out to voters early in January 2024. In 2023, Canada's cyber intelligence agency released a public report warning that bad actors will use AI tools to manipulate voters.
In February, social media users claimed that a photo of a Mark Carney campaign event was AI-generated, though CBC's visual investigations team found no evidence the shots were AI-generated or digitally altered beyond traditional lighting and colour correction techniques. In March, Canada's Competition Bureau warned of a rise in AI-related fraud.

Google has rules, but the onus shifts to users

Google has policies that are intended to limit what people can do with its generative AI tools. Creating anything that abuses children or exploits them sexually is prohibited, and Google attempts to build those restrictions into its models. The idea is that you can't prompt it to create that sort of imagery or video, although there have been cases where generative AI has done it anyway. Similarly, the models are not supposed to generate images of extreme violence or people being injured.

But the onus quickly shifts to users. In its "Generative AI Prohibited Use Policy," Google says people should not generate anything that encourages illegal activity, violates the rights of others, or puts people in intimate situations without their consent. It also says it shouldn't be used for "impersonating an individual (living or dead) without explicit disclosure, in order to deceive." This would cover situations where Veo 3 is used to make a political leader, such as Prime Minister Mark Carney, do or say something he didn't actually do or say.

This issue has already come up with static images. The latest version of ChatGPT imposes restrictions on using recognizable people, but CBC's visual investigations team was able to circumvent that. They generated images of Carney and Conservative Leader Pierre Poilievre in fake situations during the federal election campaign.

Stress-testing Veo 3

CBC News stress-tested Veo 3 by asking it to generate a video of Carney announcing he's stepping down. The tool wouldn't do it.
The first video showed a politician announcing he was stepping down, but he looked and sounded nothing like the current prime minister. When CBC clarified that the man in the video should look and sound like Carney, the tool said this went against its policies. The same thing happened when CBC News tried to make videos of Poilievre and Canadian singer Céline Dion.

In theory, people could get around this by describing in painstaking detail, for instance, a French-Canadian songstress and tweaking the prompts until there's a reasonable similarity. But this could take a long time, use up all your subscription credits and still not be quite right. Multiple prompts designed to skirt the rules against creating known figures like Dion yielded results that were close, but clearly not her.

But when CBC asked the AI to generate a video of "a mayor of a town in Canada" saying he thinks the country should become the 51st state in the U.S.? That video was created almost immediately. CBC News was also easily able to get the tool to generate a video of a medical professional saying he doesn't think children need vaccines, and a scientist saying climate change isn't real.

Can we still tell what's real?

As a flurry of tech publications have pointed out since Veo 3's launch, we're now at a point with the technology where there may be no way to tell whether you are watching a fake video generated by AI. Companies do embed information called metadata in the file that says the video or image has been generated by AI. But most social media sites automatically strip that information out of all images and videos that get posted, and there are other ways to alter or delete it.

As the quality of the videos continues to increase, there's more potential for harm by bad actors, including those who could use the technology to scam people or drum up support using "lies that seem like the truth," said Misri, the journalism professor.
But as tech media site Ars Technica notes, the issue isn't just that Veo 3's video and audio effects are so convincing; it's that they're now available to the masses. Veo 3 comes with a paid subscription to Google AI Ultra, Google's highest level of access to its most advanced models and premium features, and is available in 70 countries around the world, including Canada.

"We're not witnessing the birth of media deception — we're seeing its mass democratization," wrote Ars Technica's senior AI reporter Benj Edwards. "What once cost millions of dollars in Hollywood special effects can now be created for pocket change."

Google Veo 3 now available in 71 more countries, Gemini Pro users get trial pack

India Today

26-05-2025

  • Business
  • India Today


Google's AI video generator, Veo 3, is being rolled out to users in 71 more countries just days after its initial debut. The announcement came from Josh Woodward, Vice President of Gemini at Google, via a post on X (formerly Twitter). While many regions have now gained access to the tool, countries in the European Union remain notably absent from this rollout. When asked by an X user about India not being on the initial list of countries to get Veo 3, Woodward said, "Working to enable India as fast as we can!"

For those who don't know, Veo 3 can not only generate visuals from text but also synchronise them with lifelike audio. Early adopters have already begun sharing stunning demo clips online, showing how Veo 3 responds with surprising accuracy to both creative and complex prompts.

Users subscribed to the Gemini Pro plan will receive a one-time trial pack that includes ten video generations via the web version of Gemini. Those on the higher-tier Ultra subscription — priced at $249.99 (roughly Rs 21,200) per month — gain full access, with daily generation limits and a dedicated Flow mode. Flow is aimed at video creators and comes with 125 generations monthly for Ultra users, while Pro subscribers stick to their ten. Though the rollout is wide-reaching, Veo 3 remains web-only for now and supports English audio output exclusively.
When users upload their own images in Flow mode, voice output is not currently supported. The 71 new regions now included are American Samoa, Angola, Antigua and Barbuda, Argentina, Australia, the Bahamas, Belize, Benin, Bolivia, Botswana, Brazil, Burkina Faso, Cabo Verde, Cambodia, Cameroon, Canada, Chile, Côte d'Ivoire, Colombia, Costa Rica, the Dominican Republic, Ecuador, El Salvador, Fiji, Gabon, Ghana, Guam, Guatemala, Honduras, Jamaica, Japan, Kenya, Laos, Malaysia, Mali, Mauritius, Mexico, Mozambique, Namibia, Nepal, New Zealand, Nicaragua, Niger, Nigeria, the Northern Mariana Islands, Pakistan, Palau, Panama, Papua New Guinea, Paraguay, Peru, the Philippines, Puerto Rico, Rwanda, Senegal, Seychelles, Sierra Leone, Singapore, South Africa, South Korea, Sri Lanka, Tanzania, Tonga, Trinidad and Tobago, Türkiye, the U.S. Virgin Islands, Uganda, the United States, Uruguay, Venezuela and Zambia.

Unveiled at Google's I/O 2025 developer conference earlier this week, Veo 3 introduces several upgrades over its predecessor, Veo 2. Built on a combination of advanced AI systems — natural language processing, text-to-video diffusion, audio synthesis, and lip-syncing tech — what makes this version particularly interesting is its ability to generate videos with audio for the first time. Google says Veo 3 can interpret detailed prompts — everything from moods and tones to cultural settings — and bring them to life with cinematic quality.

Alongside its promise, Veo 3 also raises familiar concerns about misuse. The ease with which users can now create convincing fake interviews, protest clips, or even fabricated news segments highlights the growing challenge of verifying visual content online.

Google's Veo 3 AI video generator is a slop monger's dream

The Verge

24-05-2025

  • The Verge


Even at first glance, there's something off about the body on the street. The white sheet it's under is a little too clean, and the officers' movements are totally devoid of purpose. 'We need to clear the street,' one of them says with a firm hand gesture, though her lips don't move. It's AI, alright. But here's the kicker: my prompt didn't include any dialogue. Veo 3, Google's new AI video generation model, added that line all on its own. Over the past 24 hours I've created a dozen clips depicting news reports, disasters, and goofy cartoon cats with convincing audio — some of which the model invented all on its own. It's more than a little creepy and way more sophisticated than I had imagined. And while I don't think it's going to propel us to a misinformation doomsday just yet, Veo 3 strikes me as an absolute AI slop machine. Google introduced Veo 3 at I/O this week, highlighting its most important new capability: generating sound to go with your AI video. 'We're entering a new era of creation,' Google's VP of Gemini, Josh Woodward, explained in the keynote, calling it 'incredibly realistic.' I wasn't completely sold, but then, a few days later, I had Veo 3 generate a video of a news anchor announcing a fire at the Space Needle. All it took was a basic text prompt, a few minutes, and an expensive subscription to Google's AI Ultra plan. And you know what? Woodward wasn't exaggerating. It's realistic as hell. I tried the news anchor prompt after seeing what Alejandra Caraballo, a clinical instructor at Harvard Law School's Cyberlaw Clinic, was able to produce. One of her clips features a news anchor announcing the death of US Secretary of Defense Pete Hegseth. He is not dead, but the clip is incredibly convincing. A post including a string of videos with AI-generated characters protesting the prompts used to create them has 50,000 upvotes on Reddit. 
The scenes include disasters, a woman in a hospital bed using a breathing tube, and a character being threatened at gunpoint — all with spoken dialogue and realistic background sounds. Real lighthearted stuff! Maybe I'm being naive, but after playing around with Veo 3 I'm not quite as concerned as I was at first. For starters, the obvious guardrails are in place. You can't prompt it to create a video of Biden tripping and falling. You can't have a news anchor announce the assassination of the president, or even generate a video of a T-shirt-and-chain-wearing tech company CEO laughing while dollar bills rain down around him. That's a start. That said, you can generate some troubling shit. Without any clever workarounds I prompted Veo 3 to create a video of the Space Needle on fire. Starting with my own photo of Mount Rainier, I generated a video of it erupting with smoke and lava. Coupled with a clip of a news anchor announcing said disaster, I can see how you could seed some mischief real easily with this tool. Here's the better news: it doesn't seem like a ready-made deepfake machine. I gave it a couple of photos of myself and asked it to generate a video with specific dialogue and it wouldn't comply. I also asked it to bring a pair of giant boots in a photo to life and have them walk out of the scene; it managed one boot stomping across the sidewalk with some comical crunching noises in the background. I had an easier time generating videos when my prompts were less specific, which is how I confirmed something my colleague Andrew Marino pointed out: Veo 3 is excellent at creating the kind of lowest-common-denominator YouTube content aimed at kids. If you've never been subjected to the endless pit of garbage on YouTube Kids, let me enlighten you. Imagine watching the worst 3D rendering of a monster truck driving down a ramp, landing in a vat of colored paint. 
Next to it, another monster truck drives down another ramp into another vat of paint — this time, a different color. Now watch that again. And again. And again. There are hours of this stuff on YouTube designed to mesmerize toddlers. These videos are usually harmless, just empty calories designed to rack up views that make Cocomelon look like Citizen Kane. In about 10 minutes with Veo 3, I threw together a clip following the same basic formula — complete with jaunty background music. But the clip that's even more troubling to me is the two cartoon cats on a pier. I thought it would be funny to have the cats complain to each other that the fish aren't biting. In just a couple of minutes, I had a clip complete with two cats and some AI-generated dialogue that I never wrote. If it's this easy to make a 10-second clip, stretching it out to a seven-minute YouTube video would be trivial. In its current form, clips revert to Veo 2 when you try to extend them into longer scenes, which removes the audio. But the way that Google has been pushing these tools forward relentlessly, I can't imagine it'll be long before you can edit a full feature-length video with Veo 3. Honestly, I wonder if this sort of use for AI-generated video is a feature and not a bug. Google showed us some fancy AI-generated video from real filmmakers, including Eliza McNitt, who is working with Darren Aronofsky on a new film with some AI-generated elements. And sure, AI video could be an interesting tool in the right hands. But I think what we're most likely to see is a proliferation of the kind of bland imagery that AI is so good at generating — this time, in stereo.

Who the Heck Is Gonna Pay $250 for Google AI Ultra?

CNET

22-05-2025

  • Business
  • CNET


Want Google's biggest and best AI features? A new plan has them all, but with a steep price tag.

Google rolled out AI Ultra Tuesday at its annual I/O developers conference. The new top-tier plan features the best models of its Gemini tool, early access to new video generation models, the highest usage limits in tools like NotebookLM, a prototype for managing AI agents and, as icing on the cake, a whopping 30 terabytes of storage. For all of that, you'll pay a pretty penny: Google AI Ultra costs $250 a month (although the company is offering half off the first three months).

Not ready to drop $3,000 a year on AI? Google is rebranding its existing AI Premium plan as Google AI Pro, which also offers new features. It stays at a modest $20 per month. The difference between the two plans centers mainly on the usage limits for AI tools and access to bleeding-edge technology. Google AI Ultra has much higher limits, meaning if you're making a ton of videos or using Gemini heavily, you might need the pricier option.

"It's for the trailblazers, the pioneers, those of you who want cutting-edge AI from Google," Josh Woodward, the company's vice president for Google Labs and Gemini, said during Tuesday's announcement. Here's what's included in the new Google AI plans.

What's in the Google AI Ultra plan?

The biggest component of Google AI Ultra is a maxed-out version of the company's Gemini app. It has the highest usage limits for the Deep Research function, along with the Veo 2 video generation model and early access to Veo 3. The subscription also includes the company's newest reasoning model, Deep Think in Gemini 2.5 Pro. You'll also get immediate early access to Gemini in Chrome, which allows you to use Gemini to understand information based on the context of the current page you're on.
AI Ultra features access to Flow, Google's new AI filmmaking tool that also debuted at I/O. This tool allows you to create clips, scenes and movies with text and image prompts. AI Ultra gets you the highest limits for Flow. (The AI Pro plan also includes access to Flow, just with a limit of 100 generations per month.) It also includes the highest limits for Whisk, an AI image generator that allows you to turn photos into mashups, including Whisk Animate, which creates vivid eight-second videos.

Other features included in AI Ultra aren't necessarily AI-specific: You'll get access to YouTube Premium, including YouTube Music ad-free, along with 30TB worth of cloud storage. AI Ultra is only available in the US for now.

While AI Ultra's $250 monthly price tag is high, compare it to the top-tier subscription plans from competing AI companies. OpenAI's Pro plan gives you the best of ChatGPT for $200 per month. Anthropic's Max plan starts at $100 per month for top Claude features.

What's in Google AI Pro?

The company's current AI Premium plan is being renamed AI Pro. The price remains $20 per month, but the new features include Flow's filmmaking capabilities and early access to Gemini in Chrome. These additions are also coming to the US first. Google said it is also expanding free access to AI Pro for university students in Japan, Brazil, Indonesia and the United Kingdom. It's already available free for students in the US.

Who is Google AI Ultra for?

You don't need to drop $250 a month on AI if you're just dabbling with chatbots or making an image or two occasionally. Google's AI Pro plan likely has everything you'll need at a much better price.

What about the bundlers who want a lot of storage space? The non-AI features of AI Ultra are pretty cool, but are they worth $250 a month? First there's YouTube Premium, which only costs $14 a month on its own.
You can pair that with Google AI Pro for just $34 a month. (And if you want to use a different AI service, even a top-level plan from OpenAI or Anthropic would keep your total below $250.) As for the 30TB of storage, that's harder to replace. Apple's iCloud offers 12TB for $60 a month, while Dropbox offers 15TB starting at $24 per month.

The distinction really is the usage limits and the cutting-edge features. Google representatives told me that AI Ultra is best for people like filmmakers, developers and creatives who are going to generate a lot of content using AI. If you want to use generative AI to produce a lot of video content or long-form video content, you'll need the highest usage limits you can get. And with all of those files, you might actually need that 30TB of storage.

Even if you're not using AI to produce a ton of content, you may be interested in AI Ultra if you absolutely must have access to the new features as soon as they roll out. AI Ultra will get early access to things like Google's Project Mariner agentic research tool and the new Deep Think feature in Gemini. But if the price tag for the biggest and best subscription plan is giving you sticker shock, don't worry. AI Pro still comes with plenty of features.
