
AI video becomes more convincing, rattling creative industry
To measure the progress of AI video, you need only look at Will Smith eating spaghetti.
Since 2023, this unlikely sequence – entirely fabricated – has become a technological benchmark for the industry.
Two years ago, the actor appeared blurry, his eyes too far apart, his forehead exaggeratedly protruding, his movements jerky, and the spaghetti didn't even reach his mouth.
The version published a few weeks ago by a user of Google's Veo 3 platform showed no apparent flaws whatsoever.
'Every week, sometimes every day, a new one comes out that's even more stunning than the last,' said Elizabeth Strickler, a professor at Georgia State University.
With Luma Labs' Dream Machine launching in June 2024, OpenAI's Sora in December 2024, Runway AI's Gen-4 in March 2025, and Google's Veo 3 in May 2025, the sector has crossed several milestones in just a few months.
Runway has signed deals with Lionsgate studio and AMC Networks television group.
Lionsgate vice president Michael Burns told New York Magazine about the possibility of using artificial intelligence to generate animated, family-friendly versions of films like the 'John Wick' or 'Hunger Games' franchises, rather than creating entirely new projects.
'Some use it for storyboarding or previsualization' – steps that come before filming – 'others for visual effects or inserts,' said Jamie Umpherson, Runway's creative director.
Burns gave the example of a script for which Lionsgate has to decide whether to shoot a scene or not.
To help make that decision, they can now create a 10-second clip 'with 10,000 soldiers in a snowstorm.'
That kind of pre-visualization would have cost millions before.
In October 2024 came what was billed as the first AI feature film – 'Where the Robots Grow' – an animated work containing nothing resembling live-action footage.
For Alejandro Matamala Ortiz, Runway's co-founder, an AI-generated feature film is not the end goal, but a way of demonstrating to a production team that 'this is possible.'
'Resistance everywhere'
Still, some see an opportunity.
In March, startup Staircase Studio made waves by announcing plans to produce seven to eight films per year using AI for less than $500,000 each, while ensuring it would rely on unionized professionals wherever possible.
'The market is there,' said Andrew White, co-founder of small production house Indie Studios.
People 'don't want to talk about how it's made,' White pointed out. 'That's inside baseball. People want to enjoy the movie because of the movie.'
But White himself refuses to adopt the technology, considering that using AI would compromise his creative process.
Jamie Umpherson argues that AI allows creators to stick closer to their artistic vision than ever before, since it enables unlimited revisions, unlike the traditional system constrained by costs.
'I see resistance everywhere' to this movement, observed Georgia State's Strickler.
This is particularly true among her students, who are concerned about AI's massive energy and water consumption as well as the use of original works to train models, not to mention the social impact.
But refusing to accept the shift is 'kind of like having a business without having the internet,' she said. 'You can try for a little while.'
In 2023, the American actors' union SAG-AFTRA secured contractual safeguards over the use of its members' likenesses by AI.
Strickler sees AI diminishing Hollywood's role as the arbiter of creation and taste, instead allowing more artists and creators to reach a significant audience.
Runway's founders, who are as much trained artists as they are computer scientists, have gained an edge over their AI video rivals in film, television, and advertising.
But they're already looking further ahead, considering expansion into augmented reality and virtual reality – for example creating a metaverse where films could be shot.
'The most exciting applications aren't necessarily the ones that we have in mind,' said Umpherson. 'The ultimate goal is to see what artists do with technology.'