Starbucks' Pumpkin Spice Latte is coming back early for fall — here's when you'll be able to get the popular drink

New York Post, July 21, 2025
Fall is coming early to Starbucks.
The coffee giant is bringing the beloved Pumpkin Spice Latte back to menus nearly a month before fall even begins.
PSL fans will be able to get the iconic autumnal beverage starting Aug. 26, Starbucks announced.
Starbucks announced the Pumpkin Spice Latte will return August 26. (Starbucks)
In addition, the fan-favorite Pumpkin Cream Cold Brew, Iced Pumpkin Cream Chai and Pecan Crunch Oatmilk Latte will return to the menu, as well as a new Pecan Cortado and new Italian Sausage Egg Bites.
Starbucks' fall-flavored at-home coffees and creamers are now available online and in grocery stores nationwide, including Pumpkin Spice, Fall Blend, and Smoked Butterscotch, as well as Maple Pecan Latte Inspired Non-Dairy Creamer and Pumpkin Spice Latte Inspired Creamer.
A new Pecan Cortado will also be hitting menus. (Starbucks)
In early August, ready-to-drink Pumpkin Spice Flavored Coffee will also return to grocery and convenience stores nationwide, available in Iced Espresso, Frappuccino Chilled Coffee Drink, and Cold Brew Concentrate.
Even though we're far from fall, the timeline fits the range of dates when Starbucks has started PSL season in the past.
Google Trends data showed that consumer interest in pumpkin spice started earlier this year, with searches increasing starting in mid-June, compared to last year when the uptick in searches began in mid-July.
Starbucks' fall-flavored at-home coffees and creamers are already available online and in grocery stores nationwide. (Starbucks)
In early August, ready-to-drink Pumpkin Spice Flavored Coffee will also be returning to grocery stores. (Starbucks)
Starbucks first debuted the Pumpkin Spice Latte in 2003, sparking a national pumpkin craze.
The coffee chain says the PSL is its most popular seasonal beverage, made with real pumpkin flavor, spices, Starbucks Signature Espresso and steamed milk. It's available hot, iced or blended.

Related Articles

How the Shark FlexStyle has me skipping those trendy $25 blowouts: My review

New York Post, 5 hours ago

New York Post may be compensated and/or receive an affiliate commission if you click or buy through our links. Featured pricing is subject to change.

If there's one thing I know inside and out, it's beauty tools. My bathroom counter has seen more blow dryers, straighteners, diffusers, and curling gadgets than a salon during prom season. From $20 drugstore dryers to high-end unicorns that cost more than rent (hi, Dyson), I've tested them all. So, when I got my hands on the Shark FlexStyle air styling and drying system, I knew it was time to put this internet-famous multitasker to the ultimate test. Could it be the real deal, or just another overhyped beauty gizmo with a cute name and big promises?

Now, before you ask: Yes, I've read all the Shark FlexStyle reviews. Yes, I've watched the TikToks. And yes, I've even written a full review comparing the Shark FlexStyle versus the Dyson Airwrap. That's why I'm going to get real about the Shark FlexStyle and whether it's worth the investment — right now, right here.

Let's talk about the $25 blowout trend that's been all over TikTok. You know, the one where everyone's skipping Starbucks for a week just to get a professional-style blowout that somehow looks effortless and costs less than a mani. It's cute, it's fun, it's addictive — but if you're booking weekly appointments, that math adds up fast. That's where the Shark FlexStyle comes in to save your hair and your wallet. With the right Shark FlexStyle attachments (hello, oval brush and concentrator nozzle), you can recreate that salon-finish look right at home, minus the awkward small talk and the tip math. It gives you the bouncy, glossy, 'I have my life together' vibe of a professional blowout without having to leave your house or put on real pants. Honestly? The FlexStyle might just be the most economical beauty hack of the year.
I tested the Shark FlexStyle diffuser on my natural wavy curls (hello, volume!), tried every attachment like my life depended on it, and even looked up how to clean the Shark FlexStyle so it doesn't turn into a lint graveyard. And don't worry, I'll spill everything you need to know, from pros and cons to tips for getting the best results, and yes, even whether it qualifies as a true Shark FlexStyle dupe for the Dyson. So grab a coffee (or maybe your deep conditioner) and settle in. I'm about to give you the full scoop on the Shark FlexStyle.

What makes the Shark FlexStyle different?

First things first: this isn't just a Shark FlexStyle hair dryer. It's a full-blown transformation station for your hair. Think: blow dryer meets curling wand meets round brush meets diffuser…and then they all go to therapy and get along.

Pros:
  • Significantly more affordable than the Dyson Airwrap
  • Comparable styling versatility to the Dyson Airwrap
  • Lighter and easier to maneuver than the Dyson Airwrap

Cons:
  • Louder, with a high-pitched noise, compared to the Dyson Airwrap, though not a dealbreaker
  • Not as powerful heat retention and speed settings, yet yields an extremely similar result

The Shark FlexStyle attachments are designed to tackle everything from flat roots to frizzy curls to that one stubborn section that never wants to cooperate. Whether you're going full glam or just trying to fake a good hair day before a Zoom call, this little tool comes locked and loaded. And can we talk about the packaging for a sec? The Shark FlexStyle case is chic, functional and not just something I immediately throw under the sink. It's giving luxury, without the guilt. Plus, I snagged mine during an Amazon sale, which made unboxing it feel even sweeter. Because let's be honest: no one hates a deal, especially when you're getting salon-worthy results at a fraction of the price of that other brand.
Attachments: 2 Auto-Wrap Curlers (left and right), 1 Curl-Defining Diffuser, 1 Styling Concentrator (rotatable), 1 Oval Brush, 1 Paddle Brush, Storage/Carry Case | Wattage: 1,300 watts | Number of Settings: 3 adjustable levels | Cool Shot Button: Yes

My Review

Before and after using the Shark FlexStyle. (Victoria Giardina)

When I first got my hands on the Shark FlexStyle, I wasn't expecting to love it as much as I do. I originally bought it as a more affordable alternative to the Dyson Airwrap, which I'd been using but, truthfully, wanted to compare both in a final verdict showdown. Shark's version is still an investment, don't get me wrong, but it comes in at almost half the cost of Dyson's and — surprisingly — doesn't feel like a compromise. From unboxing to the first styling session, it felt sleek, intuitive and powerful without being overwhelming.

One of the things I really appreciate is the design. It's got a smart twist feature that lets you go from a traditional hair dryer to a styling wand with just a click. That flexibility (hence the name 'FlexStyle') means you can switch between drying and styling without juggling multiple tools. It's lightweight (around 1.5 pounds) and the controls are simple: three heat settings, three airflow settings and a cool shot. I also like that it doesn't get so hot that it fries your hair, which was a problem I had with older hair tools.

Let's talk attachments because that's where the FlexStyle really shines. Mine came with five: two auto-wrap curlers (one for each direction), a paddle brush, an oval brush and a concentrator nozzle. If you have curly or coily hair, there's also a diffuser option in other bundles. The curlers are super similar to Dyson's; they use that same Coanda effect to attract and wrap hair around the barrel without clamping or burning. And they work well, especially on slightly damp hair.
I found the paddle brush great for smoothing, and the oval brush gives some legit volume at the roots, kind of like a blowout.

Now, how does it compare to the Dyson Airwrap? I've used both, and while the Dyson might win slightly on luxury feel and airflow smoothness, the Shark FlexStyle holds its own. The curls lasted just as long for me with both tools, and Shark even lets you manually change curl direction with two barrels — something Dyson didn't do until its newer model. Also, the Shark has more customizable bundles, so you can choose what fits your hair type instead of buying extra attachments you'll never use.

Where I really noticed the difference was in the drying speed. The Shark is powerful and dries hair fast without getting too hot. If your hair is thick or takes ages to dry, you'll appreciate that. And even though it's powerful, it doesn't feel like a jet engine blasting your face, which is a plus. I also noticed my hair had noticeably less frizz and felt smoother after using the Shark, especially with the paddle brush.

How to Clean the Shark FlexStyle

With so many household chores, cleaning your hair tools may be the last item on your radar. However, it's part of maintenance (and good for your hair), and I find it incredibly easy to care for. To clean my Shark FlexStyle, I always start by unplugging it and letting it cool down completely. Safety first. Once it's cool, I detach all the attachments — whether I've used the styling concentrator, diffuser or round brush. I wipe down each piece with a soft, damp cloth to remove any product buildup or dust. If there's hair caught in the bristles of the brushes, I carefully pull it out using my fingers or a small comb. For the main body of the device, I use a dry microfiber cloth to gently clean the surface, making sure not to get any moisture near the electrical components.

One thing I never forget is checking the filter. At the base of the handle, there's a filter cover that easily twists off.
I remove it and gently tap out any lint or dust, then use a soft brush (sometimes even an old toothbrush) to clean the filter mesh. If it's really dirty, I'll rinse it with cool water and let it dry completely before putting it back. Once everything's clean and dry, I reassemble the device and store it in a safe, dry place. Keeping it clean not only makes it last longer, but it also helps maintain airflow and styling performance.

The Final Verdict

As someone with Goldilocks-level hair that's not too fine, nor too thick, the FlexStyle has become my go-to tool. But I've seen friends with thick, curly hair use it with the diffuser and paddle brush and get great results too. It's especially great if you want salon-style looks at home without spending a fortune or exposing your hair to extreme heat. Plus, for travelers, it's compact enough to throw in a carry-on, which I totally did on my last trip.

Overall, I'd recommend the Shark FlexStyle to anyone who wants a versatile, high-performing hair tool without the Dyson price tag. It's not just a lookalike; it's a solid contender in its own right. Curling, smoothing, volumizing — it shows up, and shows up well. Even if I just need a fast, gentle blow-dry, this little multitasker really delivers. Honestly, I reach for it almost every time I do my hair now, and my old flat iron and curling wand are officially collecting dust.

How I Tested

Here's an overview of how I tested the Shark FlexStyle:

  • Performance: I evaluated how effectively the Shark FlexStyle dried and styled hair compared to other hair tools that have been in my rotation.
  • Heat Control: I tested the consistency and safety of temperature settings to prevent hair damage.
  • Ergonomics: I assessed the weight, grip and ease of handling during use.
  • Noise Level: I measured the sound output to determine if the device operated quietly.
  • Durability: I checked the build quality and longevity after repeated use.

This article was written by Victoria Giardina, New York Post Commerce Journalist & Content Strategist, who has spent countless hours researching, testing hundreds of products and comparing the latest makeup, skincare, hair and beauty items and trends to determine what's truly worth your hard-earned cash. She evaluates formulas, textures, ingredients and more, in addition to consulting medical and industry experts. Some of Victoria's latest conquests include testing the best vitamin C serums on the market, and a rinse-and-repeat review of the best shampoos of 2025. Victoria, who received a beauty industry essentials certification from the Fashion Institute of Technology, has been creating shopping guides for the New York Post since 2021 and previously held positions at Insider Reviews and CNN Underscored.

Mark Cuban-backed 'Shark Tank' beauty brand closes, no bankruptcy

Miami Herald, 2 days ago

"Hi, I'm Fiona Co Chan. I'm from San Francisco, California, and I'm here seeking $400,000 for 5% equity in my company."

That's how Co Chan started her appearance on "Shark Tank," which led to an unlikely outcome for her company, Youthforia.

"Shark Tank," a show where businesses pitch shares in their companies to actual investors (the Sharks), has funded hundreds, maybe thousands, of entrepreneurs, but the Sharks tend to stay in their lane. Kevin "Mr. Wonderful" O'Leary, for example, has a wedding platform, and while that may not seem like his niche, he often looks for companies that expand that portfolio. Lori Greiner tends to look for products that she can launch on the QVC shopping channel, while Daymond John often leans into his experience manufacturing clothing.

Mark Cuban has done a wide array of deals, but he tends to stay in the technology space. That's not the only area where he has invested, but he generally has avoided beauty companies because that's not exactly his brand. Co Chan, however, seemed to captivate him as she sold her beauty brand, Youthforia, as a tech company of sorts.

"Our beauty products are bio-based, meaning we're using plant-based, renewable ingredients which are great for your skin and good for the planet. We are best known for our BYO Blush, the world's first color-changing blush oil that reacts to your skin's pH to give you the perfect shade," she shared.

"You know, I don't know anything about this, but to Barbara's point, I was super curious because when you have something that's completely differentiated, you ride that and you don't do anything else because that just dilutes your efforts. And sometimes you shouldn't listen to your customers. When you have something unique, you play that edge," Cuban said. He then dropped out of the bidding.

Cuban, however, never really went away, and it's not uncommon for Sharks to drop out and then come back in.
"I wish I knew a lot about this space because I think this could be the real deal and you could use some guidance on it," Cuban added. He then jumped back in.

"Would you do $400,000?... No, for me. Maybe my daughters will like this. I know they'll understand it, right? But I need more equity, right? Because you're getting three of us, Alyssa, Alexis, and Mark. And Tiffany, my wife, too. So 400,000 for 10%, no royalty," he offered.

Co Chan countered, asking if Cuban would do the deal at 8%. "I'll even go do demos," Cuban offered. O'Leary countered, offering to take lower equity but asking for a higher royalty, while Cuban did not ask for a royalty at all. "Just to prove a point, I'll do it at 8%, no royalty," Cuban countered. Co Chan barely hesitated. "You've got a deal."

Co Chan posted a message on Instagram that bluntly shared that Youthforia was being closed. "We're closing Youthforia," she wrote. "I just want to thank you from the bottom of my heart for all of your wonderful support over the past few years. It's been a dream to create such a beautiful brand - but unfortunately, I've made the hard decision to shut down Youthforia."

Co Chan did not explain the decision. She did announce that the Youthforia website was still open and that the company would be selling off its inventory in a 50% off sale. The website shared the news as well. "We're shutting down our business - thank you so much for the past few years. We appreciate you. No code needed. Shop makeup that acts like skincare," it posted on its homepage.

The company is not accepting any returns and is offering free shipping on orders over $100. Youthforia's website still shares Co Chan's original mission prominently.
"I believe that makeup should be an extension of your skincare and that means being able to fall asleep with makeup on without waking up feeling guilty," she wrote.

The Arena Media Brands, LLC. THESTREET is a registered trademark of TheStreet, Inc.

AI is rewriting the rules of storytelling. Will Hollywood adapt or be left behind?

Los Angeles Times, 4 days ago

At a Starbucks in downtown Culver City, Amit Jain pulls out his iPad Pro and presses play. On-screen, one of his employees at Luma AI — the Silicon Valley startup behind a new wave of generative video tools, which he co-founded and now runs — lumbers through the company's Palo Alto office, arms swinging, shoulders hunched, pretending to be a monkey. Jain swipes to a second version of the same clip. Same movement, same hallway, but now he is a monkey. Fully rendered and believable, and created in seconds. 'The tagline for this would be, like, iPhone to cinema,' Jain says, flipping through other uncanny clips shared on his company's Slack. 'But, of course, it's not full cinema yet.' He says it offhandedly — as if he weren't describing a transformation that could upend not just how movies are made but what Hollywood is even for. If anyone can summon cinematic spectacle with a few taps, what becomes of the place that once called it magic? Luma's generative AI platform, Dream Machine, debuted last year and points toward a new kind of moviemaking, one where anyone can make release-grade footage with a few words. Type 'a cowboy riding a velociraptor through Times Square,' and it builds the scene from scratch. Feed it a still photo and it brings the frozen moment to life: A dog stirs from a nap, trees ripple in the breeze. Dream Machine's latest tool, Modify Video, was launched in June. Instead of generating new footage, it redraws what's already there. Upload a clip, describe what you want changed and the system reimagines the scene: A hoodie becomes a superhero cape, a sunny street turns snowy, a person transforms into a talking banana or a medieval knight. No green screen, no VFX team, no code. 'Just ask,' the company's website says. For now, clips max out around 10 seconds, a limit set by the technology's still-heavy computing demands. But as Jain points out, 'The average shot in a movie is only eight seconds.' 
Jain's long-term vision is even more radical: a world of fully personalized entertainment, generated on demand. Not mass-market blockbusters, but stories tailored to each individual: a comedy about your co-workers, a thriller set in your hometown, a sci-fi epic starring someone who looks like you, or simply anything you want to see. He insists he's not trying to replace cinema but expand it, shifting from one-size-fits-all stories to something more personal, flexible and scalable. 'Today, videos are made for 100 million people at a time — they have to hit the lowest common denominator,' Jain says. 'A video made just for you or me is better than one made for two unrelated people. That's the problem we're trying to solve... My intention is to get to a place where two hours of video can be generated for every human every day.' It's a staggering goal that Jain acknowledges is still aspirational. 'That will happen, but when the prices are about a thousand times cheaper than where we are. Our research and our engineering are going toward that, to push the price down as much as humanly possible. Because that's the demand for video. People watch hours and hours of video every day.' Scaling to that level would require not just faster models but exponentially more compute power. Critics warn that the environmental toll of such expansion could be profound. For Dream Machine to become what Jain envisions, it needs more than generative tricks — it needs a built-in narrative engine that understands how stories work: when to build tension, where to land a joke, how to shape an emotional arc. Not a tool but a collaborator. 'I don't think artists want to use tools,' he says. 'They want to tell their stories and tools get in their way. Currently, pretty much all video generative models, including ours, are quite dumb. They are good pixel generators. At the end of the day, we need to build general intelligence that can tell a f— funny joke. Everything else is a distraction.' 
The name may be coincidental, but nine years ago, MIT's Media Lab launched a very different kind of machine: Nightmare Machine, a viral experiment that used neural networks to distort cheerful faces and familiar cityscapes into something grotesque. That project asked if AI could learn to frighten us. Jain's vision points in a more expansive direction: an AI that is, in his words, 'able to tell an engaging story.' For many in Hollywood, though, the scenario Jain describes — where traditional cinema increasingly gives way to fast, frictionless, algorithmically personalized video — sounds like its own kind of nightmare. Jain sees this shift as simply reflecting where audiences already are. 'What people want is changing,' he says. 'Movies obviously have their place but people aren't spending time on them as much. What people want are things that don't need their attention for 90 minutes. Things that entertain them and sometimes educate them and sometimes are, you know, thirst traps. The reality of the universe is you can't change people's behaviors. I think the medium will change very significantly.' Still, Jain — who previously worked as an engineer on Apple's Vision Pro, where he collaborated with filmmakers like Steven Spielberg and George Lucas — insists Hollywood isn't obsolete, just due for reinvention. To that end, Luma recently launched Dream Lab LA, a creative studio aimed at fostering AI-powered storytelling. 'Hollywood is the largest concentration of storytellers in the world,' Jain says. 'Just like Silicon Valley is the largest concentration of computer scientists and New York is the largest concentration of finance people. We need them. That's what's really special about Hollywood. The solution will come out of the marriage of technology and art together. I think both sides will adapt.' It's a hopeful outlook, one that imagines collaboration, not displacement. But not everyone sees it that way. 
In Silicon Valley, where companies like Google, OpenAI, Anthropic and Meta are racing to build ever more powerful generative tools, such thinking is framed as progress. In Hollywood, it can feel more like erasure — a threat to authorship itself and to the jobs, identities and traditions built around it. The tension came to a head during the 2023 writers' and actors' strikes, when picket signs declared: 'AI is not art' and 'Human writers only.' What once felt like the stuff of science fiction is now Hollywood's daily reality. As AI becomes embedded in the filmmaking process, the entire ecosystem — from studios and streamers to creators and institutions — is scrambling to keep up. Some see vast potential: faster production, lower costs, broader access, new kinds of creative freedom. Others see an extraction machine that threatens the soul of the art form and a coming flood of cheap, forgettable content. AI storytelling is just beginning to edge into theaters — and already sparking backlash. This summer, IMAX is screening 10 generative shorts from Runway's AI Film Festival. At AMC Burbank, where one screening is set to take place later this month, a protest dubbed 'Kill the Machine' is already being organized on social media, an early flashpoint in the growing resistance to AI's encroachment on storytelling. But ready or not, the gravity is shifting. Silicon Valley is pulling the film industry into its orbit, with some players rushing in and others dragged. Faced with consolidation, shrinking budgets and shareholder pressure to do more with less, studios are turning to AI not just to cut costs but to survive. The tools are evolving faster than the industry's playbook, and the old ways of working are struggling to keep up. With generative systems poised to flood the zone with content, simply holding an audience's attention, let alone shaping culture, is becoming harder than ever. While the transition remains uneven, some studios are already leaning in. 
Netflix recently used AI tools to complete a complex VFX sequence for the Argentine sci-fi series 'El Eternauta' in a fraction of the usual time. 'We remain convinced that AI represents an incredible opportunity to help creators make films and series better, not just cheaper,' co-chief executive Ted Sarandos told analysts during a July earnings call. At Paramount, incoming chief executive David Ellison is pitching a more sweeping transformation: a 'studio in the cloud' that would use AI and other digital tools to reinvent every stage of filmmaking, from previsualization to post. Ellison, whose Skydance Media closed its merger with Paramount Global this week and whose father, Larry Ellison, co-founded Oracle, has vowed to turn the company into a tech-first media powerhouse. 'Technology will transform every single aspect of this company,' he said last year. In one of the most visible examples of AI adoption in Hollywood, Lionsgate, the studio behind the 'John Wick' and 'Hunger Games' franchises, struck a deal last year with the generative video startup Runway to train a custom model on its film and TV library, aiming to support future project development and improve efficiency. Lionsgate chief executive Jon Feltheimer, speaking to analysts after the agreement, said the company believes AI, used with 'appropriate guardrails,' could have a 'positive transformational impact' on the business. Elsewhere, studios are experimenting more quietly: using AI to generate early character designs, write alternate dialogue or explore how different story directions might land. The goal isn't to replace writers or directors, but to inform internal pitches and development. At companies like Disney, much of the testing is happening in games and interactive content, where the brand risk is lower and the guardrails are clearer. For now, the prevailing instinct is caution. No one wants to appear as if they're automating away the heart of the movies. 
As major studios pivot, smaller, more agile players are building from the ground up for the AI era. According to a recent report by an L.A.-based innovation studio that helps launch and advise early-stage AI startups in entertainment, more than 65 AI-native studios have launched since 2022, most of them tiny, self-funded teams of five or fewer. At these studios, AI tools allow a single creator to do the work of an entire crew, slashing production costs by 50% to 95% compared with traditional live-action or animation. The boundaries between artist, technician and studio are collapsing fast — and with them, the very idea of Hollywood as a gatekeeper. That collapse is raising deeper questions: When a single person anywhere in the world can generate a film from a prompt, what does Hollywood still represent? If stories can be personalized, rendered on demand or co-written with a crowd, who owns them? Who gets paid? Who decides what matters and what disappears into the churn? And if narrative itself becomes infinite, remixable and disposable, does the idea of a story still hold any meaning at all? Yves Bergquist leads the AI in Media Project at USC's Entertainment Technology Center, a studio-backed think tank where Hollywood, academia and tech converge. An AI researcher focused on storytelling and cognition, he has spent years helping studios brace for a shift he sees as both inevitable and wrenching. Now, he says, the groundwork is finally being laid. 'We're seeing very aggressive efforts behind the scenes to get studios ready for AI,' Bergquist says. 'They're building massive knowledge graphs, getting their data ready to be ingested into AI systems and putting governance committees in place to start shaping real policy.' But adapting won't be easy, especially for legacy studios weighed down by entrenched workflows, talent relationships, union contracts and layers of legal complexity. 'These AI models weren't built for Hollywood,' Bergquist says. 
'This is 22nd-century technology being used to solve 21st-century problems inside 19th-century organizational models. So it's blood, sweat and tears getting them to fit.'

In an algorithmically accelerated landscape where trends can catch fire and burn out in hours, staying relevant is its own challenge. To help studios keep pace, Bergquist co-founded Corto, an AI startup that describes itself as a 'growth genomics engine.' The company, which also works with brands like Unilever, Lego and Coca-Cola, draws on thousands of social and consumer sources, analyzing text, images and video to decode precisely which emotional arcs, characters and aesthetics resonate with which demographics and cultural segments, and why. 'When the game is attention, the weapon is understanding where culture and attention are and where they're going,' Bergquist says, arguing media ultimately comes down to neuroscience.

Corto's system breaks stories down into their formal components, such as tone, tempo, character dynamics and visual aesthetics, and benchmarks new projects against its extensive data to highlight, for example, that audiences in one region prefer underdog narratives or that a certain visual trend is emerging globally. Insights like these can help studios tailor marketing strategies, refine storytelling decisions or better assess the potential risk and appeal of new projects. With ever-richer audience data and advances in AI modeling, Bergquist sees a future where studios can fine-tune stories in subtle ways to suit different viewers. 'We might know that this person likes these characters better than those characters,' he says. 'So you can deliver something to them that's slightly different than what you'd deliver to me.'
A handful of studios are already experimenting with early versions of that vision — prototyping interactive or customizable versions of existing IP, exploring what it might look like if fans could steer a scene, adjust a storyline or interact with a favorite character. Speaking at May's AI on the Lot conference, Danae Kokenos, head of technology innovation at Amazon MGM Studios, pointed to localization, personalization and interactivity as key opportunities. 'How do we allow people to have different experiences with their favorite characters and favorite stories?' she said. 'That's not quite solved yet, but I see it coming.' Bergquist is aware that public sentiment around AI remains deeply unsettled. 'People are very afraid of AI — and they should be,' he acknowledges. 'Outside of certain areas like medicine, AI is very unpopular. And the more capable it gets, the more unpopular it's going to be.' Still, he sees a significant upside for the industry. Get AI right, and studios won't just survive but redefine storytelling itself. 'One theory I really believe in is that as more people gain access to Hollywood-level production tools, the studios will move up the ladder — into multi-platform, immersive, personalized entertainment,' he says. 'Imagine spending your life in Star Wars: theatrical releases, television, VR, AR, theme parks. That's where it's going.' The transition won't be smooth. 'We're in for a little more pain,' he says, 'but I think we'll see a rebirth of Hollywood.' You don't have to look far to find the death notices. TikTok, YouTube and Reddit are full of 'Hollywood is dead' posts, many sparked by the rise of generative AI and the industry's broader upheaval. Some sound the alarm. Others say good riddance. But what's clear is that the center is no longer holding and no one's sure what takes its place. 
Media analyst Doug Shapiro has estimated that Hollywood produces about 15,000 hours of fresh content each year, compared with 300 million hours uploaded annually to YouTube. In that context, generative AI doesn't need to reach Hollywood's level to pose a major threat to its dominance — sheer volume alone is enough to disrupt the industry. The supply of content is exploding, but attention itself hasn't grown.

As the monoculture fades from memory, Hollywood's cultural pull is loosening. This year's Oscars drew 19.7 million viewers, fewer than tuned in to a typical episode of 'Murder, She Wrote' in the 1990s. The best picture winner, 'Anora,' earned just $20 million at the domestic box office, one of the lowest tallies of any winner in the modern era. Critics raved, but fewer people saw it in theaters than watch the average moderately viral TikTok.

Amid this fragmentation, generative AI tools are fueling a surge of content. Some creators have a new word for it: 'slop' — a catchall for cheap, low-effort, algorithmically churned-out media that clogs the feed in search of clicks. Once the world's dream factory, Hollywood is now asking how it can stand out in an AI-powered media deluge.

Ken Williams, chief executive of USC's Entertainment Technology Center and a former studio exec who co-founded Sony Pictures Imageworks, calls it a potential worst-case scenario in the making — 'the kind of wholesale dehumanization of the creative process that people, in their darkest moments, fear.'

Williams says studios and creatives alike worry that AI will trap audiences in an algorithmic cul-de-sac, feeding them more of what they already know instead of something new. 'People who live entirely in the social media world and never come out of that foxhole have lost the ability to hear other voices — and no one wants to see that happen in entertainment.'

If the idea of uncontrolled, hyper-targeted AI content sounds like something out of an episode of 'Black Mirror,' it was.
In the 2023 season opener 'Joan Is Awful,' a woman discovers her life is being dramatized in real time on a Netflix-style streaming service by an AI trained on her personal data, with a synthetic Salma Hayek cast as her on-screen double.

So far, AI tools have been adopted most readily in horror, sci-fi and fantasy, genres that encourage abstraction, stylization and visual surrealism. But when it comes to human drama, emotional nuance or sustained character arcs, the cracks start to show. Coherence remains a challenge. And as for originality — the kind that isn't stitched together from what's already out there — the results have generally been far from revelatory.

At early AI film festivals, the output has often leaned toward the uncanny or the conceptually clever: brief, visually striking experiments with loose narratives, genre tropes and heavily stylized worlds. Many feel more like demos than fully realized stories. For now, the tools excel at spectacle and pastiche but struggle with the kinds of layered, character-driven storytelling that define traditional cinema.

Then again, how different is that from what Hollywood is already producing? Today's biggest blockbusters — sequels, reboots, multiverse mashups — often feel so engineered to please that it's hard to tell where the algorithm ends and the artistry begins. Nine of the top 10 box office hits in 2024 were sequels. In that context, slop is, to some degree, in the eye of the beholder. One person's throwaway content may be another's creative breakthrough — or at least a spark.

Joaquin Cuenca, chief executive of Freepik, rejects the notion that AI-generated content is inherently low-grade. The Spain-based company, originally a stock image platform, now offers AI tools for generating images, video and voice that creators across the spectrum are starting to embrace. 'I don't like this 'slop' term,' Cuenca says.
'It's this idea that either you're a top renowned worldwide expert or it's not worth it — and I don't think that's true. I think it is worth it. Letting people with relatively low skills or low experience make better videos can help people get a business off the ground or express things that are in their head, even if they're not great at lighting or visuals.'

Freepik's tools have already made their way into high-profile projects. Robert Zemeckis' 'Here,' starring a digitally de-aged Tom Hanks and set in one room over a period of decades, used the company's upscaling tech to enhance backgrounds. A recently released anthology of AI-crafted short films, 'Beyond the Loop,' which was creatively mentored by director Danny Boyle, used the platform to generate stylized visuals. 'More people will be able to make better videos, but the high end will keep pushing forward too,' Cuenca says. 'I think it will expand what it means to be state of the art.'

For all the concern about runaway slop, Williams envisions a near-term stalemate, where AI expands the landscape without toppling the kind of storytelling that still sets Hollywood apart. In that future, he argues, the industry's competitive edge — and perhaps its best shot at survival — will still come from human creators.

That belief in the value of human authorship is now being codified by the industry's most influential institution. Earlier this year, the Academy of Motion Picture Arts and Sciences issued its first formal guidance on AI in filmmaking, stating that the use of generative tools will 'neither help nor harm' a film's chances of receiving a nomination. Instead, members are instructed to consider 'the degree to which a human was at the heart of the creative authorship' when evaluating a work.

'I don't see AI necessarily displacing the kind of narrative content that has been the province of Hollywood's creative minds and acted by the stars,' Williams says.
'The industry is operating at a very high level of innovation and creativity. Every time I turn around, there's another movie I've got to see.'

Inside Mack Sennett Studios, a historic complex in L.A.'s Echo Park neighborhood once used for silent film shoots, a new kind of studio is taking shape: Asteria, the generative AI video studio founded by filmmaker-turned-entrepreneur Bryn Mooser. Asteria serves as the creative arm of Moonvalley, an AI storytelling company led by technologist and chief executive Naeem Talukdar. Together, they're exploring new workflows built around the idea that AI can expand, rather than replace, human creativity.

Mooser, a two-time Oscar nominee for documentary short subject and a fifth-generation Angeleno, sees the rise of AI as part of Hollywood's long history of reinvention, from sound to color to CGI. 'Looking back, those changes seem natural, but at the time, they were difficult,' he says. What excites him now is how AI lowers technical barriers for the next generation. 'For people who are technicians, like stop-motion or VFX artists, you can do a lot more as an individual or a small team,' he says. 'And really creative filmmakers can cross departments in a way they couldn't before. The people who are curious and leaning in are going to be the filmmakers of tomorrow.'

It's a hopeful vision, one shared by many AI proponents who see the tools as a great equalizer, though some argue it glosses over the structural realities facing working artists today, where talent and drive alone may not be enough to navigate a rapidly shifting, tech-driven landscape.

That tension is precisely what Moonvalley is trying to address. Their pitch isn't just creative; it's legal. While many AI companies remain vague about what their models are trained on, often relying on scraped content of questionable legality, Moonvalley built its video model, Marey, on fully licensed material and in close collaboration with filmmakers.
That distinction is becoming more significant. In June, Disney and Universal filed a sweeping copyright lawsuit against Midjourney, a popular generative AI tool that turns text prompts into images, accusing it of enabling rampant infringement by letting users generate unauthorized depictions of characters like Darth Vader, Spider-Man and the Minions. The case marks the most aggressive legal challenge yet by Hollywood studios against AI platforms trained on their intellectual property.

'We worked with some of the best IP lawyers in the industry to build the agreements with our providers,' Moonvalley's Talukdar says. 'We've had a number of major studios audit those agreements. We're confident every single pixel has had a direct sign-off from the owner. That was the baseline we operated from.'

The creative frontier between Hollywood and AI is drawing interest from some of the industry's most ambitious filmmakers. Steven Spielberg and 'Avengers' co-director Joe Russo are among the advisors to Wonder Dynamics, an AI-driven VFX startup. Darren Aronofsky, the boundary-pushing director behind films like 'Black Swan' and 'The Whale,' recently launched the AI studio Primordial Soup, partnering with Google DeepMind. Its debut short, 'Ancestra,' directed by Eliza McNitt, blends real actors with AI-generated visuals and premiered at the Tribeca Film Festival in June.

Not every foray into AI moviemaking has been warmly received. Projects that spotlight generative tools have stoked fresh arguments about where to draw the line between machine-made and human-driven art. In April, actor and director Natasha Lyonne, who co-founded Asteria with her partner, Mooser, announced her feature directorial debut: 'Uncanny Valley,' a sci-fi film about a world addicted to VR gaming that combines AI and traditional filmmaking techniques.
Billed as offering 'a radical new cinematic experience,' the project drew backlash from some critics who questioned whether such ventures risk diminishing the role of human authorship. Lyonne defended the film to the Hollywood Reporter, making clear she's not replacing crew members with AI: 'I love nothing more than filmmaking, the filmmaking community, the collaboration of it, the tactile fine art of it... In no way would I ever want to do anything other than really create some guardrails for a new language.'

Even the boldest experiments face a familiar hurdle: finding an audience. AI might make it easier to make a movie, but getting people to watch it is another story. For now, the real power still lies with platforms like Netflix and TikTok that decide what gets seen.

That's why Mooser believes the conversation shouldn't be about replacing filmmakers but empowering them. 'When we switched from shooting on film to digital, it wasn't the filmmakers who went away — it was Kodak and Polaroid,' he says. 'The way forward isn't everybody typing prompts. It's putting great filmmakers in the room with the best engineers and solving this together. We haven't yet seen what AI looks like in the hands of the best filmmakers of our time. But that's coming.'

For more than a century, watching a movie has been a one-way experience: The story flows from screen to viewer. Stephen Piron wants to change that. His startup Pickford AI — named for Mary Pickford, the silent-era star who co-founded United Artists and helped pioneer creative control in Hollywood — is exploring whether stories can unfold in real time, shaped by the audience as they watch. Its cheeky slogan: 'AI that smells like popcorn.'

Pickford's flagship demo looks like an animated dating show but behaves more like a game or an improv performance. There's no fixed script. Viewers type in suggestions through an app and vote on others' ideas.
A large language model then uses that input, along with the characters' backstories and a rough narrative outline, to write the next scene in real time. A custom engine renders it on the spot, complete with gestures and synthetic voices. Picture a cartoon version of 'The Bachelor' crossed with a choose-your-own-adventure, rendered by AI on the fly.

At live screenings this year in London and Los Angeles, audiences didn't just watch — they steered the story, tossing in oddball twists and becoming part of the performance. 'We wanted to see if we could bring the vibe of the crowd back into the show, make it feel more like improv or live theater,' Piron says. 'The main reaction is people laugh, which is great. There's been lots of positive reaction from creative people who think this could be an interesting medium to create new stories.'

The platform is still in closed beta, but Piron's goal is a collaborative storytelling forum where anyone can shape a scene, improvise with AI and instantly share it. To test that idea on a larger scale, Pickford is developing a branching murder mystery with Emmy-winning writer-producer Bernie Su ('The Lizzie Bennet Diaries'). Piron, who is skeptical that people really want hyper-personalized content, is also exploring ways to bring the interactive experience into more theaters. 'I think there is a vacuum of live, in-person experiences that people can do — and maybe people are looking for that,' he says.

As generative AI lowers the barrier to creation, the line between creator and consumer is starting to blur, and some of the most forward-looking startups are treating audiences as collaborators, not just fans. One example is Showrunner, a new, Amazon-backed platform from Fable Studio that lets users generate animated, TV-style episodes using prompts, images and AI-generated voices — and even insert themselves into the story. Initially free, the platform plans to charge a monthly subscription for scene-generation credits.
Fable is pitching Showrunner as 'the Netflix of AI,' a concept that has intrigued some studios and unsettled others. Chief executive Edward Saatchi says the company is already in talks with Disney and other content owners about bringing well-known franchises onto the platform.

Other AI companies are focused on building new franchises from the ground up, with audiences as co-creators from day one. Among the most ambitious is Invisible Universe, which bypasses traditional gatekeepers entirely and develops fresh IP in partnership with fans across TikTok, YouTube and Instagram. Led by former MGM and Snap executive Tricia Biggio, the startup has launched original animated characters with celebrities like Jennifer Aniston and Serena Williams, including Clydeo, a cooking-obsessed dog, and Qai Qai, a dancing doll. But its real innovation, Biggio says, is the direct relationship with the audience. 'We're not going to a studio and saying, 'Do you like our idea?' We're going to the audience,' she says. 'If Pixar were starting today, I don't think they'd choose to spend close to a decade developing something for theatrical release, hoping it works.'

While some in the industry are still waiting for an AI 'Toy Story' or 'Blair Witch' moment — a breakthrough that proves generative tools can deliver cultural lightning in a bottle — Biggio isn't chasing a feature-length hit. 'There are ways to build love and awareness for stories that don't require a full-length movie,' she says. 'Did it make you feel something? Did it make you want to go call your mom? That's going to be the moment we cross the chasm.'

For nearly a century, filmmakers have imagined what might happen if machines got too smart. In 1927's 'Metropolis,' a mad scientist gives his robot the likeness of a beloved labor activist, then unleashes it to sow chaos among the city's oppressed masses. In '2001: A Space Odyssey,' HAL 9000 turns on its crew mid-mission.
In 'The Terminator,' AI nukes the planet and sends a killer cyborg back in time to finish the job. 'Blade Runner' and 'Ex Machina' offered chilling visions of artificial seduction and deception. Again and again, the message has been clear: Trust the machines at your peril.

Director Gareth Edwards, best known for 'Godzilla' and 'Rogue One: A Star Wars Story,' wanted to flip the script. In 'The Creator,' his 2023 sci-fi drama, the roles are reversed: Humans are waging war against AI, and the machines, not the people, are cast as the hunted. The story follows a hardened ex-soldier, played by John David Washington, who's sent to destroy a powerful new weapon, only to discover it's a child: a young android who may be the key to peace.

'The second you look at things from AI's perspective, it flips very easily,' Edwards told The Times by phone shortly before the film's release. 'From AI's point of view, we are attempting to enslave it and use it as our servant. So we're clearly the baddie in that situation.'

In many ways, 'The Creator' was the kind of film audiences and critics say they want to see more often from Hollywood: an original story that takes creative risks, delivering cutting-edge visuals on a relatively lean $80-million budget. But when it hit theaters that fall, the film opened in third place behind 'Paw Patrol: The Mighty Movie' and 'Saw X.' By the end of its run, it had pulled in a modest $104.3 million worldwide.

Part of the problem was timing. When Edwards first pitched the film, AI was still seen as a breakthrough, not a threat. But by the time the movie reached theaters, the public mood had shifted. The 2023 strikes were in full swing, AI was the villain of the moment — and here came a film in which AI literally nukes Los Angeles in the opening minutes. The metaphor wasn't subtle. Promotion was limited, the cast was sidelined and audiences weren't sure whether to cheer the movie's message or recoil from it.
While the film used cutting-edge VFX tools to help bring its vision to life, it served as a potent reminder that AI could help make a movie — but it still couldn't shield it from the backlash.

Still, Edwards remains hopeful about what AI could mean for the future of filmmaking, comparing it to the invention of the electric guitar. 'There's a possibility that if this amazing tool turns up and everyone can make any film that they imagine, it's going to lead to a new wave of cinema,' he says. 'Look, there's two options: Either it will be mediocre rubbish — and if that's true, don't worry about it, it's not a threat — or it's going to be phenomenal, and who wouldn't want to see that?'

After 'The Creator,' Edwards returned to more familiar terrain, taking the reins on this summer's 'Jurassic World Rebirth,' the sixth installment in a franchise that began with Steven Spielberg's 1993 blockbuster, which redefined spectacle in its day. To date, the film has grossed more than $700 million worldwide.

So what's the takeaway? Maybe there's comfort in the known. Maybe audiences crave the stories they've grown up with. Maybe AI still needs the right filmmaker or the right story to earn our trust. Or maybe we're just not ready to root for the machines. At least not yet.
