
Latest news with #Ancestra

Engadget Podcast: Ancestra director Eliza McNitt defends AI as a creative tool

Engadget

18-07-2025



Eliza McNitt is no stranger to new media. Her 2017 project, Fistful of Stars, was a fascinating look at stellar birth in virtual reality, while her follow-up Spheres explored black holes and the death of stars. Now with her short film Ancestra, McNitt has tapped into Google's AI tools to tell a deeply personal story. Working with Google DeepMind and director Darren Aronofsky's studio Primordial Soup, McNitt used a combination of live-action footage and AI-generated media to tell the story of her own traumatic birth. The result is an uncanny dramatic short where the genuine emotion of the live-action performance wrestles against the artificiality of AI imagery.

The film begins when the lead's (Audrey Corsa, playing McNitt's mother) routine prenatal care appointment turns into an emergency delivery. From that point on we hear her opine on how her child and all living things in the universe are connected — evoking the poetic nature of Terrence Malick's films. We jump between Corsa's performance, AI footage and macro- and micro-photography. In the end, Corsa holds a baby that was inserted by Google's AI, using prompts that make it look like McNitt as an infant.

There's no escaping the looming shadow of Google's AI ambitions. This isn't just an art film — it's an attempt at legitimizing the use of AI tools through McNitt's voice. That remains a problem when Google's models, including Veo and other technology from DeepMind, have been trained on pre-existing content and copyrighted works. A prestigious short coming from Darren Aronofsky's production studio isn't enough to erase that original sin.

"I was challenged to create an idea that could incorporate AI," McNitt said in an interview on the Engadget Podcast. "And so for me, I wanted to tell a really deeply personal story in a way that I had not been able to before... AI really offered this opportunity to access these worlds where a camera cannot go, from the cosmos to the inner world of being within the mother's womb."

When it comes to justifying the use of AI tools, which at the moment can credibly be described as plagiaristic technology, McNitt says that's a decision every artist will have to make for themselves. In the case of Ancestra, she wanted to use AI to accomplish difficult work, like creating a computer-generated infant that looked like her, based on photos taken by her father. She found that to be more ethical than bringing in a real newborn, and the results more convincing than a doll or something animated by a CG artist.

"I felt the use of AI was really important for this story, and I think it's up to every artist to decide how they wanna use these tools and define that," she said. "That was something else for me in this project where I had to define a really strong boundary where I did not want actors to be AI actors, [they] had to be humans with a soul. I do not feel that a performance can be recreated by a machine. I do deeply and strongly believe that humanity can only be captured through human beings. And so I do think it's really important to have humans at the center of the stories."

To that end, McNitt also worked with dozens of artists to create the sound, imagery and AI media in Ancestra.
There's a worry that AI video tools will let anyone plug in a few prompts and build projects out of low-effort footage, but McNitt says she closely collaborated with a team of DeepMind engineers who crafted prompts and sifted through the results to find the footage she was looking for. (We ran out of time before I could ask her about the environmental concerns from using generative AI, but at this point we know it requires a significant amount of electricity and water. That includes demands for training models as well as running them in the cloud.)

"I do think, as [generative AI] evolves, it's the responsibility of companies to not be taking copyrighted materials and to respect artists and to set those boundaries, so that artists don't get taken advantage of," McNitt said, when asked about her thoughts on future AI models that compensate artists and aren't built on stolen copyrighted works. "I think that that's a really important part of our role as humans going forward. Because ultimately, these are human stories for other human beings. And so it's, you know, important that we are at the center of that."

Ancestra actually says a lot about the current state of AI-generated videos

The Verge

18-06-2025



After watching writer/director Eliza McNitt's new short film Ancestra, I can see why a number of Hollywood studios are interested in generative AI. A number of the shots were made and refined solely with prompts, in collaboration with Google's DeepMind team. It's obvious what Darren Aronofsky's AI-focused Primordial Soup production house and Google stand to gain from the normalization of this kind of creative workflow. But when you sit down to listen to McNitt and Aronofsky talk about how the short came together, it is hard not to think about generative AI's potential to usher in a new era of 'content' that feels like it was cooked up in a lab — and put scores of filmmakers out of work in the process.

Inspired by the story of McNitt's own complicated birth, Ancestra zooms in on the life of an expectant mother (Audrey Corsa) as she prays for her soon-to-be-born baby's heart defect to miraculously heal. Though the short features a number of real actors performing on practical sets, Google's Gemini, Imagen, and Veo models were used to develop Ancestra's shots of what's racing through the mother's mind and the tiny, dangerous hole inside of the baby's heart. Inside the mother's womb, we're shown Blonde-esque close-ups of the baby, whose heartbeat gradually becomes part of the film's soundtrack. And the woman's ruminations on what it means to be a mother are visualized as a series of very short clips of other women with children, volcanic explosions, and stars being born after the Big Bang — all of which have a very stock-footage-by-way-of-gen-AI feel to them.

It's all very sentimental, but the message being conveyed about the power of a mother's love is cliched, particularly when it's juxtaposed with what is essentially a montage of computer-generated nature footage. Visually, Ancestra feels like a project that is trying to prove how all of the AI slop videos flooding the internet are actually something to be excited about. The film is so lacking in fascinating narrative substance, though, that it feels like a rather weak argument in favor of Hollywood's rush to get to the slop trough while it's hot.

As McNitt smash cuts to quick shots of different kinds of animals nurturing their young and close-ups of holes being filled in by microscopic organisms, you can tell that those visuals account for a large chunk of the film's AI underpinnings. They each feel like another example of text-to-video models' ability to churn out uncanny-looking, decontextualized footage that would be difficult to incorporate into a fully produced film. But in the behind-the-scenes making-of video that Google shared in its announcement last week, McNitt speaks at length about how, when faced with the difficult prospect of having to cast a real baby, it made much more sense to her to create a fake one with Google's models.

'There's just nothing like a human performance and the kind of emotion that an actor can evoke,' McNitt explains. 'But when I wrote that there would be a newborn baby, I did not know the solution of how we would [shoot] that because you can't get a baby to act.' Filmmaking with infants poses all kinds of production challenges that simply aren't an issue with CGI babies and doll props. But going the gen AI route also presented McNitt with the opportunity to make her film even more personal by using old photos of herself as a newborn to serve as the basis for the fake baby's face.
With a bit of fine-tuning, Ancestra's production team was able to combine shots of Corsa and the fake baby to create scenes in which they almost, but not quite, appear to be interacting as if both were real actors. If you look closely in wider shots, you can see that the mother's hand seems to be hovering just above her child because the baby isn't really there. But the scene moves by so quickly that it doesn't immediately stand out, and it's far less 'AI-looking' than the film's more fantastical shots meant to represent the hole in the baby's heart being healed by the mother's will.

Though McNitt notes how 'hundreds of people' were involved in the process of creating Ancestra, one of the behind-the-scenes video's biggest takeaways is how relatively small the project's production team was compared to what you might see on a more traditional short film telling the same story. Hiring more artists to conceptualize and then craft Ancestra's visuals would have undoubtedly made the film more expensive and time-consuming to finish. Especially for indie filmmakers and up-and-coming creatives who don't have unlimited resources at their disposal, those are the sorts of challenges that can be exceedingly difficult to overcome.

But Ancestra also feels like a case study in how generative AI stands to eliminate jobs that once would have gone to people. The argument is often that AI is a tool, and that jobs will shift rather than be replaced. Yet it's hard to imagine studio executives genuinely believing in a future where today's VFX specialists, concept artists, and storyboarders have transitioned into jobs as prompt writers who are compensated well enough to sustain their livelihoods. This was a huge part of what drove Hollywood's film and TV actors and writers to strike in 2023. It's also why video game performers have been on strike for the better part of the past year, and it feels irresponsible to dismiss these concerns as people simply being afraid of innovation or resistant to change.

In the making-of video, Aronofsky points out that cutting-edge technology has always played an integral role in the filmmaking business. You would be hard-pressed today to find a modern film or series that wasn't produced with the use of powerful digital tools that didn't exist a few decades ago. There are things about Ancestra's use of generative AI that definitely make it seem like a demonstration of how Google's models could, theoretically and with enough high-quality training data, become sophisticated enough to create footage that people would actually want to watch in a theater. But the way Aronofsky goes stony-faced and responds 'not good' when one of Google's DeepMind researchers explains that Veo can only generate eight-second-long clips says a lot about where generative AI is right now and about Ancestra as a creative endeavor.

It feels like McNitt is telling on herself a bit when she talks about how the generative models' output influenced the way she wrote Ancestra. She says 'both things really informed each other,' but that sounds like a very positive way of spinning the fact that Veo's technical limitations required her to write dialogue that could be matched to a series of clips vaguely tied to the concepts of motherhood and childbirth. This all makes it seem like McNitt's core authorial intent, at times, had to be deprioritized in favor of working with whatever the AI models spat out.
Had it been the other way around, Ancestra might have wound up telling a much more interesting story. As it stands, there's very little about Ancestra's narrative or, to be honest, its visuals that is so groundbreaking that it feels like an example of why Hollywood should be rushing to embrace this technology whole cloth. Films produced with more generative AI might be cheaper and faster to make, but the technology as it exists now doesn't really seem capable of producing art that would put butts in movie theaters or push people to sign up for another streaming service. And it's important to bear in mind that, at the end of the day, Ancestra is really just an ad meant to drum up hype for Google, which is something none of us should be rushing to do.

Hollywood isn't ready for AI. These people are diving in anyway

Los Angeles Times

28-05-2025



When filmmakers say they're experimenting with artificial intelligence, that news is typically received online as if they had just declared their allegiance to Skynet. And so it was when Darren Aronofsky — director of button-pushing movies including 'The Whale' and 'Black Swan' — last week announced a partnership with Google's AI arm DeepMind to use the tech giant's capabilities in storytelling.

Aronofsky's AI-focused studio Primordial Soup is producing three short movies from emerging filmmakers using Google tools, including the text-to-video model Veo. The first film, 'Ancestra,' directed by Eliza McNitt, will premiere at the Tribeca Festival on June 13, the Mountain View-based search giant said. Google's promotional materials take pains to show that 'Ancestra' is a live-action film made by humans and with real actors, though it's bolstered with effects and imagery — including a tiny baby holding a mother's finger — that were created with AI.

The partnership was touted during Google's I/O developer event, where the company showed off the new Veo 3, which allows users to create videos that include sound effects, ambient noise and speech (a step up from OpenAI-owned competitor Sora). The company also introduced its new Flow film creation tool, essentially editing software using Google AI functions.

Google's push to court creative types coincides with a separate initiative to help AI technology overcome its massive public relations problem. As my colleague Wendy Lee wrote recently, the company is working with filmmakers including Sean Douglas and his famous father Michael Keaton to create shorts that aren't made with AI, but instead portray the technology in a less apocalyptic light than Hollywood is used to.

Simply put, much of the public sees AI as a foe that will steal jobs, rip off your intellectual property, ruin your childhood, destroy the environment and possibly kill us all, like in 'The Terminator,' '2001: A Space Odyssey' and the most recent 'Mission: Impossible' movies. And Google, which is making a big bet by investing in AI, has a lot riding on changing that perception.

There's a ways to go, including in the entertainment industry. Despite the allure of cost savings, traditional studios haven't exactly dived headfirst into the AI revolution. They're worried about the legal implications of using models trained on troves of copyrighted material, and they don't want to anger the entertainment worker unions, which went on strike partly over AI fears just a couple of years ago. The New York Times and others have sued OpenAI and its investor Microsoft, alleging copyright theft. Tech giants claim they are protected by 'fair use.'

AI-curious studios are walking into a wild, uncharted legal landscape because of the amount of copyrighted material being mined to teach the models, said Dan Neely, co-founder of startup Vermillio, which helps companies and individuals protect their intellectual property. 'The major studios and most people are going to be challenged using this product when it comes to the output content that you can and cannot use or own,' Neely said by phone. 'Given that it contains vast quantities of copyrighted material, and you can get it to replicate that stuff pretty easily, that creates chaos for someone who's creating with it.'

But while the legacy entertainment business remains largely skeptical of AI, many newer, digitally native studios and creators are embracing it, whether their goals are to become the next Pixar or the next MrBeast.
The New York Times recently profiled the animation startup Toonstar, which says it uses AI throughout its production process, including when sharpening storylines and lip-syncing. John Attanasio, a Toonstar founder, told the paper that leaning into the tech would make animation '80 percent faster and 90 percent cheaper than industry norms.' Jeffrey Katzenberg, the former leader of DreamWorks Animation, has given a similar estimate of the potential cost savings for Hollywood cartoons.

Anyone working in the traditional computer animation business would have to gulp at those projections, whether they turn out to be accurate or not. U.S. animation jobs have already been hammered by outsourcing. Now here comes automation to finish the job. (Disney's animated features cost well over $100 million to produce because they're made by real-life animators in America.)

Proponents of AI will sometimes argue that the new technology isn't a replacement for human workers, but rather a tool to enhance creativity. Some are more blunt: Stop worrying about these jobs and embrace the future of uninhibited creation. For obvious reasons, workers are reluctant to buy into that line of thinking.

More broadly, it's still unclear whether all the spending on the AI arms race will ultimately be worth the cost. Goldman Sachs, in a 2024 report, estimated that companies would invest $1 trillion in AI infrastructure — including data centers, chips and the power grid — in the coming years. But that same report raised questions about AI's ultimate utility. To be worth the gargantuan investment, the technology would have to be capable of solving far more complex problems than it does now, said one Goldman analyst in the report. In recent weeks, the flaws in the technology have crossed over into absurd territory: for example, AI-generated summer reading lists of fake books and legal documents polluted with serious errors and fabrications.

Big spending and experimentation don't always pan out. Look at virtual reality, the metaverse and the blockchain. But some entertainment companies are experimenting with the tools and finding applications. Meta has partnered with horror studio Blumhouse and James Cameron's venture Lightstorm Vision on AI-related initiatives. AI firm Runway is working with Lionsgate. At a time when the movie industry is troubled in part due to the high cost of special effects, production companies are motivated to stay on top of advancing tech.

One of the most common arguments in favor of giving in to AI is that the technology will unshackle the next generation of creative minds. Some AI-enhanced content is promising. But so far AI video tools have produced a remarkable amount of content that looks the same, with its oddly dreamlike sheen of unreality. That's partly because the models are trained on color-corrected imagery available on the open internet or on YouTube. Licensing from the studios could help with that problem.

The idea of democratizing filmmaking through AI may sound good in theory. However, there are countless examples in movie history — including 'Star Wars' and 'Jaws' — of how physical and budgetary restrictions are actually good for art, however painful and frustrating they may have been during production. Even within the universe of AI-assisted material, the quality will vary dramatically depending on the talent and skill of the people using it. 'Ultimately, it's really hard to tell good stories,' Neely said.
'The creativity that defines what you prompt the machine to do is still human genius — the best will rise to the top.'

Like other innovations, the technology will improve with time, as the new Google tools show. Both Veo 3 and Flow showcase how AI is becoming better and easier to use, though they are still not quite mass-market products. For its highest tier, Google is charging $250 a month for its suite of tools.

Maybe the next Spielberg will find their way through AI-assisted video, published for free on YouTube. Perhaps Sora and Veo will have a moment that propels them to mainstream acceptance in filmmaking, as 'The Jazz Singer' did for talkies. But those milestones still feel a long way off.

The Memorial Day weekend box office achieved record revenue (not adjusting for inflation) of $329.8 million in the U.S. and Canada, thanks to the popularity of Walt Disney Co.'s 'Lilo & Stitch' and Paramount's 'Mission: Impossible — The Final Reckoning.' Disney's live-action remake generated $183 million in domestic ticket sales, exceeding pre-release analyst expectations, while the latest Tom Cruise superspy spectacle opened with $77 million.

The weekend was a continuation of a strong spring rebound for theaters. Revenue so far this year is now up 22% versus 2024, according to Comscore. This doesn't mean the movie business is saved, but it does show that having a mix of different kinds of movies for multiple audiences is healthy for cinemas. Upcoming releases include 'Karate Kid: Legends,' 'Ballerina,' 'How to Train Your Dragon' and a Pixar original, 'Elio.'

'Lilo & Stitch' is particularly notable, coming after Disney's previous live-action redo, 'Snow White,' bombed in theaters. While Snow White has an important place in Disney history, Stitch — the chaotic blue alien — has quietly become a hugely important character for the company, driving enormous merchandise sales over the years. The 2002 original wasn't a huge blockbuster, coming during an awkward era for Walt Disney Animation, but the remake certainly is.

Watch: Prepping for the new 'Naked Gun' by rewatching the classic and reliving the perfect Twitter meme.

Listen: My favorite episode of 'Blank Check with Griffin & David' in a long time — covering Steven Spielberg's 'Hook' with Lin-Manuel Miranda.

Darren Aronofsky Partners with Google DeepMind on Generative AI Short Film Initiative

Yahoo

20-05-2025



Darren Aronofsky has launched a new generative AI storytelling venture in which he will partner with Google DeepMind to produce short films with gen-AI and some of Google's newly announced tools. The venture is titled Primordial Soup, and its research team, along with three filmmakers, will produce short films integrating new technology and storytelling, with a stated mission of creating frameworks for AI's role in filmmaking and putting artists in the driver's seat of technological innovation.

'Filmmaking has always been driven by technology. After the Lumiere Brothers and Edison's ground-breaking invention, filmmakers unleashed the hidden storytelling power of cameras. Later technological breakthroughs — sound, color, VFX — allowed us to tell stories in ways that couldn't be told before. Today is no different. Now is the moment to explore these new tools and shape them for the future of storytelling,' Aronofsky said in an official statement.

The news was announced alongside Google's I/O event, at which the tech giant also unveiled its latest generative video model, Veo 3, as well as an advanced new gen-AI editing tool called Flow. Google DeepMind will give Primordial Soup's team early access to these tools.

The first film produced under the partnership is called 'Ancestra' and is directed by Eliza McNitt. Her film will premiere at the Tribeca Film Festival on June 13, and it will be followed by a panel featuring the filmmakers and moderated by Aronofsky. The film blends live-action filmmaking and performance with generative visuals and is described as a deeply personal narrative inspired by the day McNitt was born. McNitt trained the AI models on her own baby pictures and other photos taken by her late father in order to generate a newborn infant with a story that could be shaped by her own biography.

'With 'Ancestra,' I was able to visualize the unseen, transforming family archives, emotions, and science into a cinematic experience that feels both intimate and expansive,' said McNitt.

McNitt is known for a previous VR experience, executive produced by Aronofsky, that featured Millie Bobby Brown, Jessica Chastain, and Patti Smith as the voices of the cosmos. It was the first VR project ever to be acquired out of Sundance.

Two additional films, yet to be announced, will explore other new applications of Veo, Google DeepMind's video generation model. Watch the first teaser for 'Ancestra,' and learn more about this strategic partnership at the Google DeepMind blog.

Darren Aronofsky joins AI Hollywood push with Google deal

Los Angeles Times

20-05-2025



Director Darren Aronofsky has pushed artistic boundaries with movies including 'Requiem for a Dream' and 'Mother!' Now his production company is working with Google to explore the edge of artificial intelligence technology in filmmaking.

Google on Tuesday said it is working with several filmmakers to use new AI tools as part of a larger push to popularize the fast-moving tech. That effort includes a partnership with Aronofsky's venture, Primordial Soup. Google's AI-focused subsidiary DeepMind and Aronofsky's firm will work with three filmmakers, giving them access to the Mountain View, Calif.-based giant's text-to-video tool Veo, which they will use to make short films.

The first project, 'Ancestra,' is directed by Eliza McNitt. Aronofsky is an executive producer on the film. 'Ancestra,' which premieres at the Tribeca Festival next month, combines live-action filmmaking with imagery generated with AI, such as cosmic events and microscopic worlds.

'Filmmaking has always been driven by technology,' Aronofsky said in a statement that referenced film tech pioneers the Lumiere brothers and Thomas Edison. 'Today is no different. Now is the moment to explore these new tools and shape them for the future of storytelling.'

The push comes as Google and other companies are making deals with Hollywood talent and production companies to use their AI tools. For example, Facebook parent company Meta is partnering with 'Titanic' director James Cameron's venture, Lightstorm Vision, to co-produce content for its virtual reality headset Meta Quest. New York-based AI startup Runway has a deal with 'Hunger Games' studio Lionsgate to create a new AI model to help with behind-the-scenes processes such as storyboarding.

Many people in Hollywood have been critical of AI tools, raising concerns about the automation of jobs. Writers worry about AI models being trained on their scripts without their permission or compensation. Tech industry executives have said that they should be able to train AI models with content available online under the 'fair use' doctrine, which allows for the limited reproduction of material without permission from the copyright holder.

Proponents of the technology say that it can provide more opportunities for filmmakers to test out ideas and show a variety of visuals at a lower cost. New York-based Primordial Soup said in a press release that Google's AI tools helped solve 'practical challenges such as filming with infants and visualizing the birth of the universe' in 'Ancestra.' 'With 'Ancestra,' I was able to visualize the unseen, transforming family archives, emotions, and science into a cinematic experience that feels both intimate and expansive,' McNitt said in a statement. The two additional filmmakers and films participating in the Google DeepMind-Primordial Soup deal are not yet named.

Google made the announcement as part of its annual I/O developer conference in Mountain View. During the event's keynote address on Tuesday, Google shared updates on its AI tools for filmmakers, including Veo 3, which allows creators to type in how they want dialogue to sound and add sound effects. The company also unveiled a new AI filmmaking tool called Flow that helps users create cinematic shots and stitch together scenes into longer films and short stories. 'This opens up a whole new world of possibilities,' said Demis Hassabis, chief executive of Google DeepMind, in a news briefing on Monday. 'We're excited for how our models are helping power new tools for creativity.'
Flow is available through Google's new $249.99 monthly subscription plan, Google AI Ultra, which includes early access to Veo 3 as well as other benefits including YouTube Premium, Google's Gemini AI models and other tools. Flow is also available with a $19.99-a-month Google AI Pro subscription.

Google is making other investments related to AI. On Tuesday, L.A.-based generative AI studio Promise announced Google AI Futures Fund as one of its new strategic investors. Through the partnership, Promise will integrate some of Google's AI technologies into its production pipeline and workflow software and collaborate with Google's AI teams.
