New Porsche Hypercar Teased as Modern 911 GT1 Successor
In a black-and-white YouTube video hashtagged #RACEBORN, Porsche has teased its upcoming 963 RSP, a one-off road-going version of the LMDh race car that has won races in both the WEC and IMSA. Back in April, Porsche hinted at an upcoming street-legal hypercar by highlighting its 1975 917 race car, which was also converted for street use. Now we know that the imminent reveal will be the road-legal version of the 963 endurance racer, which debuted in 2023 and has won the 24 Hours of Daytona twice.
The 963 RSP is presumed to be built for Roger Searle Penske, hence the "RSP" moniker, which has never been used on a Porsche before. Penske owns the team that runs Porsche's factory endurance-racing program and is the most successful team owner in motor-racing history, winning four of the five available championships across the 2024 FIA WEC and IMSA seasons. The car is expected to stay true to the racer, with the same twin-turbocharged 4.6-liter V8, hybrid system, and seven-speed sequential transmission.
Porsche has done this before, when it handcrafted a single road-legal example of the 1996 911 GT1 Le Mans racer that competed in the GT1 class at the 24 Hours of Le Mans; it called that car the 911 GT1 Strassenversion. The 963 RSP is similarly handcrafted, and in Porsche's teaser video, workers sand down carbon-fiber panels, mix paint, do interior stitchwork, and generally apply their specialist trades to finish the one-of-one masterpiece. With the teasers now starting, it won't be long before one of the most covetable Porsches of the modern era is revealed to the world.
Copyright 2025 The Arena Group, Inc. All Rights Reserved.
Related Articles


Fast Company
New frontiers of brand storytelling: Why the NewFronts matter more than ever
Media is in a constant state of transformation. To understand how we got here, and where we're going, it's worth looking back to 2008 as the tipping point. It was a year most might remember for the global economic meltdown. But at the same time, the future growth of interactive media was forming in the background: Netflix was a year into its transition from DVD mail rentals to unlimited streaming, Hulu launched publicly, and YouTube introduced us to the world of high definition. The same year, my company, Digitas, unveiled a bold vision for the media industry. We recognized that digital media wasn't just a supplement to traditional campaigns; it was the future. That vision sparked the creation of what was first called the 'Digitas Digital Content NewFronts' on June 11, 2008.

Today, those values transcend the confines of media. The NewFronts now encompass the broader C-suite, tying brand storytelling, connected media, and converging technology more closely together. The core question remains the same: What are the new frontiers for connected brand storytelling across an ever-evolving media landscape?

Today's most impactful brands aren't waiting for permission to tell their stories. They aren't hoping for placement opportunities. They're becoming producers in their own right. When brands step into this role, they move from interrupting entertainment to creating content that is authentic, inspiring, and entertaining. Simply put, like any creative production house or media network, brands are focused on making content that connects with people on an emotional level. This shift reflects how audiences perceive and engage with brand messaging. Rather than enduring advertising to get back to content they love, audiences seek out brand-produced entertainment because it resonates with them personally and as part of the wider cultural conversation across every interface that matters to them today. Think about how audiences react to James Bond reluctantly drinking a beer brand that feels disconnected from his character's typical 'shaken, not stirred' martini. It's a terrific example of a traditional product placement—but it's not a meaningful part of the story. It doesn't redefine the relationship between the character, the audience, and the brand in between. Audiences expect more from the entertainment presented to them. Those new expectations also implicate brands in viewers' judgment of what's worthy or not.

TELLING YOUR STORY BEFORE SOMEONE ELSE DOES

All brands face a critical choice right now: Define your narrative or let others define it for you. The phenomenal success of the 2023 summer movie blockbuster Barbie demonstrates how active involvement in storytelling by a brand like Mattel can create cultural moments that transcend traditional marketing. Without the brand's participation, Barbie would have been a vastly different film—and that would have been a major missed opportunity to reshape the brand narrative for new generations. Seeing brands show up as the star or supporting player in big-screen entertainment can be a meaningful way to merchandise their brand truth and mission. By striking out to tell their own stories, brands are better able to strike a deeper emotional chord with consumers while elevating and clarifying their commercial objectives. Sephora's 'Faces of Music' showed the depth of the cosmetics retail brand's intellectual and artistic sides through a three-part docuseries.
The series offered an intimate look at how the featured musicians use makeup and beauty rituals to express their identities and enhance their performances. It underscored Sephora's commitment to celebrating individuality and self-expression through the arts. Your brand story is more important than ever, and there is a new role brands' agency partners need to play. It's not limited to creating a 30-second spot or arranging the ad buys or product placements. It's about being fully embedded in the creative process itself. Creativity in this new environment is everyone's business.

THE RETURN OF APPOINTMENT VIEWING

Despite predictions that this endless cascade of content and the mainstreaming of on-demand viewing would kill scheduled programming, we're witnessing a renaissance of appointment viewing. My college-age daughter texts me when The Handmaid's Tale drops a new episode; she wants to watch it immediately to participate in the cultural conversation. This behavioral shift offers a powerful opportunity for brands and their agency partners. By crafting and aligning with 'must-see content' that drives conversation, brands can become central to these cultural moments. The hunger for quality storytelling amid tightening studio budgets can only be supported by brands' largesse. Think of Severance's collaboration with State Farm, The White Lotus's alliance with the Four Seasons hotel chain, or the pairing of Flonase with Bridgerton. These are examples of creating spaces where brands can meaningfully contribute to narratives audiences crave rather than simply advertising alongside them.

THE NEW BRAND-PRODUCER AGE

Perhaps the most significant evolution is the blurring of boundaries between platforms and experiences. Content discovery has become its own challenge—consumers struggle to find shows even after seeing billboards promoting them. Platforms like Roku are developing interfaces to help content rise above algorithmic limitations. Retailers like Walmart are becoming entertainment companies, just as they're becoming publishers within the retail media network space. It's another way brands are taking advantage of additional channels to reach audiences. Walmart's 23-part holiday rom-com series, Add to Heart, launched across Roku, TikTok, YouTube, and its own platforms. As the drama played out, viewers could shop over 330 featured products directly from the episodes. On Roku, users could click a button to get product links sent to their phone for easy shopping. This 'RomCommerce' experience blended entertainment and e-commerce to appeal to younger shoppers during the holidays. This convergence of commerce and content opens new possibilities for integrating storytelling with business outcomes. The new frontier isn't just 'branded content'—it's about competing for attention against entertainment itself. From documentaries that resurface during culturally relevant moments to intellectual property partnerships that feed audience hunger between seasons, creating 'culture-breaking content' that stands out in a crowded landscape is critical.
Yahoo
Protein Powder Market Size Predicted to Exceed USD 39.15 Billion by 2031, Rising at a CAGR of 6.1%
The protein powder market is growing rapidly, driven by increasing health consciousness, fitness trends, and rising demand for plant-based protein sources. Protein powders are popular among athletes, fitness enthusiasts, and individuals seeking nutritional supplementation. Key players in the market include Nestlé S.A., Glanbia PLC, ADM (Archer Daniels Midland Company), PepsiCo, Inc., Herbalife Nutrition Ltd., and The Simply Good Foods Company.

US & Canada, June 05, 2025 (GLOBE NEWSWIRE) -- According to a comprehensive new report from The Insight Partners, the protein powder market is observing significant growth owing to the burgeoning focus on muscle building and fitness worldwide. The protein powder market report comprises a detailed analysis of different varieties of protein powder, such as whey protein powder, soy protein powder, pea protein powder, and casein protein powder, and their sales through retail stores and online shopping websites, which together contribute to the market growth in the coming years. The report runs an in-depth analysis of market trends, key players, and future prospects. To explore the valuable insights in the Protein Powder Market report, you can download a sample PDF of the report.

Overview of Report Findings

Market Growth: The protein powder market value is expected to reach US$ 39.15 billion by 2031 from US$ 25.77 billion in 2024; it is estimated to record a CAGR of 6.1% during the forecast period. Protein powder helps fulfill the daily protein requirements of an individual, gain muscle mass, and ensure significant muscle recovery. Different types of protein powder available in the market include whey protein; casein protein; and plant-based protein such as soy, pea, and mung bean powder.

Rising focus on muscle building and recovery: In recent years, a large number of individuals have shifted their focus toward fitness routines, strength training, and active lifestyles due to rising health concerns, mainly triggered by sedentary lifestyles, junk food consumption, and lack of physical activity. These routines drive the need for high-protein nutritional support. Protein plays a prominent role in repairing muscle tissues, enhancing muscle recovery post-exercise, and supporting lean muscle growth, making it a staple supplement for athletes, bodybuilders, and fitness enthusiasts. Non-athletes are also recognizing the benefits of protein for general wellness, including maintaining muscle mass during weight loss and supporting healthy aging.

Rising social media and celebrity influence: The rising influence of social media and celebrities presents significant growth opportunities for the protein powder market. Social media platforms such as Instagram, TikTok, and YouTube have become powerful tools for shaping consumer behavior, especially in the health and fitness space. Fitness influencers, athletes, and celebrities frequently endorse protein powders, showcasing them as essential components of their wellness routines. These endorsements not only increase product visibility but also build consumer trust and aspiration, particularly among young people, who form one of the largest user bases of social media. Celebrities posting their workout routines also emphasize regular protein intake, which boosts the popularity of protein powder brands.

Geographical Insights: In 2024, North America led the market with a substantial revenue share, followed by Europe and Asia Pacific, respectively. Asia Pacific is estimated to register the highest CAGR during the forecast period.
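As a quick sanity check, the growth rate implied by these endpoints can be recomputed; assuming the standard compound annual growth rate formula over the seven-year span from 2024 to 2031:

\[
\mathrm{CAGR} = \left(\frac{V_{2031}}{V_{2024}}\right)^{1/7} - 1 = \left(\frac{39.15}{25.77}\right)^{1/7} - 1 \approx 1.519^{1/7} - 1 \approx 0.062,
\]

or roughly 6.2% per year, in line with the reported 6.1% CAGR once rounding and the report's forecast-period convention are taken into account.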
Market Segmentation

Based on product type, the protein powder market is segmented into whey protein powder, soy protein powder, pea protein powder, casein protein powder, and others. The whey protein powder segment held the largest share of the market in 2024. Based on category, the market is bifurcated into organic and conventional. The conventional segment held a larger market share in 2024. In terms of distribution channel, the protein powder market is segmented into supermarkets and hypermarkets, specialty stores, online retail, and others. The specialty stores segment held the largest share of the market in 2024. The protein powder market is segmented into five major regions: North America, Europe, APAC, Middle East and Africa, and South and Central America.

Competitive Strategy and Development

Key Players: A few of the major companies operating in the protein powder market are Glanbia Plc, MusclePharm, Abbott, CytoSport Inc., Quest Nutrition LLC, Iovate Health Sciences International Inc., The Bountiful Company, AMCO Proteins, Now Foods, and Transparent Labs.

Trending Topics: Sports nutrition, bodybuilding, vegan protein, plant-based protein, dairy-free protein powder, amino acid profile, clean-label protein, etc.

Global Headlines on Protein Powder

Eat Just Launches One-Ingredient Mung Bean Protein Powder at Whole Foods
Qualify Protein Releases New Wild Strawberry Whey Protein Powder
'Zyra Protein' Launches First Ever Plant-Based Protein Powder with Just Four Ingredients
Darigold Launches High-Protein Shakes Amid Rising Demand

Conclusion

The protein powder market is experiencing strong growth, fueled by rising health consciousness, increased participation in fitness and sports activities, and growing awareness of the role of protein in muscle building and recovery. Consumers across various age groups are turning to protein supplements for general wellness, weight management, and healthy aging, in addition to supporting athletic performance when needed. The demand for protein powder varieties is further supported by product innovations—evident in plant-based options, clean-label formulations, and personalized nutrition—catering to diverse dietary preferences and lifestyles. The report from The Insight Partners provides several stakeholders—including soy manufacturers, pea manufacturers, dairy farmers, protein powder manufacturers, distributors, and retailers—with valuable insights into how to successfully navigate this evolving market landscape and unlock new opportunities.

About Us

The Insight Partners is a one-stop industry research provider of actionable intelligence. We help our clients find solutions to their research requirements through our syndicated and consulting research services. We specialize in industries such as Semiconductor and Electronics, Aerospace and Defense, Automotive and Transportation, Biotechnology, Healthcare IT, Manufacturing and Construction, Medical Device, Technology, Media and Telecommunications, and Chemicals and Materials.
Contact Us

If you have any queries about this report or would like further information, please contact us:
Contact Person: Ankit Mathur
E-mail:
Phone: +1-646-491-9876


The Verge
Runway CEO Cris Valenzuela wants Hollywood to embrace AI video
Today, I'm talking with Runway CEO and cofounder Cris Valenzuela. This one's special: Cris and I were live at an event in New York City last month hosted by AlixPartners, so you'll hear the audience in the background from time to time. Runway is one of the leading AI video generation platforms. The basic concept is familiar by now: you start with a reference image — either something you've created using Runway's own model or something you upload — you type in a prompt, and Runway spits out a fully formed video sequence. But what's most interesting to me about Runway is that while the AI hype is at a fever pitch right now, there's a little more depth to the company. Cris founded the company back in 2018, so he's been through some boom-and-bust periods in AI, and you'll hear that experience come through as we talk about the technology and what it can and can't do. When Cris began to more seriously explore AI video generation as a researcher at New York University, we still mostly referred to AI as 'machine learning,' and you'll hear him recount how primitive the technology was back then compared to now. That said, the AI hype really is out of control, and Runway is on the same collision course with creators, artists, and copyright law as every other part of the AI industry — and you'll hear Cris and I really get into all that here. One theme you'll hear Cris come back to again and again in this conversation is that he does not see Runway as a disruptive outsider to filmmaking, but rather as an active participant in the art. He sees Runway as a tool that will bring filmmaking and other forms of artistic expression to many more people, and not as an apocalyptic force that's going to hit Hollywood like a wrecking ball. You'll hear him say Runway is working with many of the biggest movie studios — publicly, it has already struck deals with Lionsgate and AMC Networks. In the AMC announcement, Cris said embracing AI video generation was a 'make-or-break moment' for every entertainment company. But cozying up to Hollywood doesn't mean Runway is off the hook in the AI vs. art debate. In fact, Runway itself is part of an ongoing class-action lawsuit over the use of artistic works in AI training data. Last year, it was revealed Runway had trained on huge swaths of copyrighted YouTube material, including The Verge's own YouTube channel. So I asked Cris as plainly as I could whether Runway had in fact trained on YouTube and how the industry might survive a world where all these companies are made to pay substantial amounts of money to creators if even one of these big AI copyright lawsuits doesn't break their way. I think you'll find our discussion on this to be pretty candid, and Cris articulated some of his own defenses for how the AI industry has approached this topic and what might happen next. It's Decoder, so of course we also talked about Runway's structure. Cris has a lot to say about Runway functioning as a research lab, and the tension that exists between releasing and refining real products and then putting them into the hands of professionals, all while working on new models and tools that might make the current tech obsolete. This interview has been lightly edited for length and clarity. You started Runway before the big AI boom. We were joking earlier that the URL still has 'ML' in it because people were calling it machine learning before. What's changed since the boom in that approach?
Have you had to rethink, 'Okay, everyone understands what training a model is now, and the market for GPUs is more expensive'? What are the changes? A lot has changed. I think we started the company in 2018. Machine learning was the way we referenced the field of AI broadly. I think a few things have changed. First of all, models have become really good. I mean, it's obvious to everyone. I hope everyone here has used an AI model by now. I'm assuming that has happened. Seven years ago, no one had. I think consistency, quality, and overall output of models across the board have gotten really good, and that has just changed people's experiences with AI. I think the second thing that is becoming more real is the value of these models and how useful they are. It's becoming more evident to many people. A couple of years ago, it was more theoretical about how they could potentially be used. There are still many avenues where we don't entirely know how AI will change things. We just don't know yet. In some others, it has really changed many things. In learning and education, it's pretty clear that pretty much every student out there, from now into the future, will start using AI models to learn. But I think that has happened. Then competition, of course. Now, everyone's paying attention to this. When we started, there was really no one trying to build this. If you had this same conversation eight years ago, and I told you we're going to have AI models that can render video in hyperrealistic ways, people would think we were crazy. Now, it's an obvious direction, and there are a lot of people also trying to solve the same problem. Was your ability to actually do the work constrained by the amount of compute you had at the beginning? Is it just scaling laws that brought you to where you are today? So, scale is one of the main things. I think we've realized, as an industry, that scale matters. I guess the lesson that we've seen over time is that if you just scale computing, then models work really well. I think at the beginning, it wasn't that obvious. It became more obvious over the last couple of years. And then more compute definitely helps, but more compute and more data, and also better algorithms. So it's not just one single ingredient. It's not just that if you get more compute, suddenly, things get better. I think it's a combination of different things. Just put this into practice for me. When you guys first started, how long would it take to render a frame of video versus how long now? When we started, you couldn't. That's the thing. The first thing we ever did was a text-to-image model that produced 256-pixel-wide images. If you've ever seen a Mark Rothko painting, it was very abstract. That's the closest it could get. So if you wanted to render a face, a house, or whatever, the result was in the right range of colors, but it was very off. We went from that pixelated, very low-res image to 4K content that's 20 seconds long with very sophisticated movement and actions. I think it's the realization that at that time, video was not even in the scope of what we thought was possible. Then, over time, it became really feasible. Now I think we joke that we're consistently moving the goalpost, where the feedback we get from Runway users is like, 'Great, Cris. You can generate that bouncing ball on Mars, but in frame 27, the ball's direction is slightly off.' I'm like, 'Great, that's a great piece of feedback,' because we will solve it.
But also, you don't realize that a year ago, you just didn't think this was possible. One of the reasons that I see the big platform companies are so invested in video generation, in particular, is that they're pointed at the advertising industry. You mentioned you have advertising clients. Mark Zuckerberg is not even subtle anymore. He's like, 'I'm going to kill the advertising industry.' He just says it out loud. I think he also said something similar at Stripe Sessions a couple of weeks ago. His pitch was something like, 'You don't even have to do anything. Just come to us and tell us how many customers you want, and maybe some ideas about what your product is. I'll generate video advertising, and I'll stick it in the feeds, and you just watch the money roll in.' This is a very Mark Zuckerberg way of thinking, but that is the first big market where you see we're going to bring the cost of making the ads down, and that will result in some return. Is that where the demand is coming in for you as well? I think that's a very appealing concept and world for many people who have never had the chance of making ads in the first place. There are many businesses out there that just can't afford to work with an agency to get a production team to shoot a AAA film or ad. I think part of it is like, 'Well, if you can actually help others do that, I think that's great.' It definitely increases or raises the bar for many because now anyone can do it. I think it's less about killing the ad agencies; I think that's an overall simplification. I think it's more about reducing the time it takes to make something. The cost of making any piece of content will, hopefully, go down to the cost of inference. So if you're good at making things and conceptualizing ideas, you're going to have systems that can aid you in generating whatever you need, but you still need to have a good idea. So you will still have agencies, you'll still have talent and creatives, but perhaps the time it takes to make things is just going to be dramatically reduced. Hopefully, that opens the door for many other folks to do this work. Yeah, I mean, I think Mark wants to kill the ad industry. [Laughs] Yeah, we should ask him, I don't know. He's a very aggressive human being. But the reason I ask that question is because I see so many of these products and so many of these capabilities, and they haven't yet connected to business results. There was a study from IBM last month stating that 25% of the AI investments they had seen in companies had returned on that investment. It's a low number. Everyone's trying stuff and figuring it out. I get it in advertising. I understand that's just the cost of acquiring customers. Have you seen places in film studios and other places where just bringing the cost down is worth the investment? Yeah, absolutely. I was just on a call with a studio right before this, and we were going through a script that they wanted to test with Runway. I don't know if you guys have ever worked in film, but you develop the script, and the common thing to do next is a storyboard. So, you basically take the storyboard and someone spends a week or two weeks just drawing. This is for a scene or a couple of scenes, not for an entire film. It's really long, really expensive, and time-consuming. So, when they were reading me through the part of the script where they needed our help with Runway, I was generating the storyboards on the fly. By the time they finished, the storyboard was done. 
So, I think the first thing was that they couldn't realize or fully understand what was going on because they had never worked at that velocity, that speed. For them, speed is also cost. If you compare the time it takes to make all of those storyboards by hand with having the screenwriters do it in real time, it dramatically shrinks the time in which the whole project gets developed and worked on. So, you have all these moments and gaps where AI can really just help you accelerate your own work, specifically in creative industries where things are still done very manually. I actually want to ask you about that because I know you think a lot about the creative industries and the act of creativity. The counterargument is that the gap between the screenwriter and the storyboard artist, and the time it takes to communicate and translate, is where the magic happens. Having the AI collapse that into a mechanical process, as opposed to a creative process, actually reduces the quality of the creative. How do you feel about that? Yeah, I don't think I fully agree with that. I think part of it is that we sometimes obsess about the process of how we make things. The goal of the screenwriter is to get the ideas in his mind, in his world, out there in the most obvious ways. You work with the set of technologies and tools around you, and if you're able to do it faster, I think that's great. You can iterate on concepts faster. You can understand your ideas faster. You can collaborate with more people, and you can make more. One of the bigger bottlenecks of media these days is that you have people working on one project for three or four years, and then the studio might actually kill it for many different reasons. So, if you think about it, you spend four years of your life working on a thing that never saw the light of day because it happened to be killed for whatever reason. I think the idea will be that you don't have to work on one project. You can work on many more. So, the quantity aspect of it also becomes a component we should consider. Because right now, we're bound by the way we're working. It's very slow, and it's very constrained by all these processes. If you can augment that, then people can start doing more and more and more. I think that's great. Is that the model for you? Is it that quantity will drive the business? I think quantity leads to quality. As an artist, the more you make, the better things you'll do. No artist has drawn once and thought, 'Oh, suddenly, I'm a master.' Picasso painted hundreds of thousands of paintings, and many of you have never seen all of them. You just see the 1%. The same goes for musicians. People are there playing every single day until they hit something that actually works. I think tools should be like that. They should be able to augment how you work so you can do more, and then you're the one choosing what you're doing. But look, I started the company because I always wanted to make films. I grew up in Chile, and I never had the means of even buying a camera in the first place. I got my camera when I was 27 years old. It was pretty late, and part of it was that cameras were very expensive. I couldn't afford Adobe software because it was very expensive back then. I probably wouldn't have become a great filmmaker, but it would've been great if I had the chance to tell the stories that I had in my head. I think it was a technical barrier that prevented me from doing so.
Now we have kids in every part of the world using Runway and making those ideas, which I find just fascinating. It's great. It's very simple. It's a subscription. You just pay for the product, and you get access to different parts of it. We have a free tier, so you can also just use it for free. Then we work with schools. There's a course at NYU, the NYU Film School, that teaches students how to use Runway. So, instead of going to film school and giving you a camera, they give you Runway. We're doing that with a few other schools as well. For all of those, we just give access for free. The studios you partner with, do they pay a lot of money, or are they subsidizing it for users? No. For businesses, we charge. I mean, students can pay, but also, they pay because it's useful. If it helps you do something, then sure, the value is worth it. Are you profitable yet? No, we're growing, and I think a part of what we're doing is just investing in research more than anything else. What's your runway? [Laughs] We've been obsessively working on this. I would say over the last 12 to 18 months, the models got to a place where you can actually do very good things with Runway. I think there's always an optimization function that companies have to run, which is, 'Do you want to optimize for whatever is working now, or do you want to keep on growing?' I think for us, we really want to keep on growing. There's a lot of research we can invest in and a lot of areas of growth that we can keep on going. So, I think the tension right now has always been like, 'Do we want to optimize for this, or what's next?' I think we want to lean into what's next. I think there are a lot of things we haven't actually fully discovered that we could do that we want to do. It's very lean. Someone thought the other day that we were 1,000 people, and I thought that that was the best compliment that you could give me. We're like 100 people or so. It's very flat, and very focused on autonomy more than anything else. What we do is less of objectives and we actually don't believe in objectives. We have a way of working where we just set boundaries and where we want people to do research or explore because a lot of what we do has never been done before. So, if I tell you how to get there, I'm probably wrong because we've never done it. So, it's research. You have to experiment and fail. What we do is we set their constraints and the boundaries on where we want you to experiment. The best outcomes of the research we've done have been about setting the right boundaries and then letting people go, letting people work on their own, and figuring out on their own how to do it. So are you full holacracy, no org chart? I mean, there's some org chart in some way, but people collaborate. We have a studio, an internal studio with creatives, producers, and filmmakers working along with research. Those people are sitting at the same table, speaking the same language. They come from different backgrounds, but they managed to collaborate. So, yeah, that's when you want to promote. One of the reasons I'm interested in asking that question, particularly of AI companies of your size, is that there is a deep connection to the capabilities of the model, the research that's being done, and the kinds of products you can build. I haven't seen a lot of great, focused AI products. Runway actually might be one of them. But in the broad case, there's ChatGPT, which is just an open-ended interface to a frontier model, and then we're going to see what happens. 
Do you think that as you get bigger, the products will get more focused, or do you think you still need the connection between the team building the model and the product teams themselves? I think the connection between product and model helps the product team better understand what's coming. So, you need to understand that the way tech used to work was in much lower cycles of R&D. Now, research tends to move in very fast cycles. So, the issue with product, and I think product is one of the hardest things to do right now… You scope the area of product that we work on, design it, and start building it. By the time you build it, it's obsolete. You've basically lost six months of work, or however long it takes you. So, product needs to behave like a research organization. The way we tell our team is like, look, we have research scientists working on research, but everyone in the company is a scientist because everyone is running experiments. So, before you spend too much time doing something. Run an experiment, build a simple prototype, and understand if it's worth it. Then, check with research to see if they think the thing you're working with is going to become useful, or avoid getting submerged by the next generation of models. What happens a lot is that our customers are coming to us with specific questions like, 'Hey, the model does this, but it doesn't do this. Can you build a specific product for that?' We could build a product just for that, or we could wait for the next generation of models that would just do all of that on the fly. So, that's the tricky part because you're always trying to play catch-up. I think companies that understand research are much better positioned than companies that are trying to catch up. There's a comparison I keep making here that you're not going to like, but I'm going to make it anyway. I started covering tech a million years ago, now with gray hair and a beard. When Bluetooth came out, everybody knew what the product was going to be, right? Everybody saw the headsets. Every real estate agent in America had a giant Motorola headset, and it's like, 'Oh, you want AirPods? We want AirPods.' But the standard was just not ready for another decade, and then Apple had to actually build a proprietary layer on top of the standard to make AirPods. That took a full decade. It was just not ready. There was a real dance there between, 'What do we want to build? What's the product? Can we build it, and does the technology support our goals?' You're describing the same dynamic. The thing that gets me about what you're describing is, well, the model's just going to eat the product over and over and over again. How do you even know what products to build? Yeah, it's very hard. Because everyone can see the AirPods, right? Everyone's like, 'The computer is going to talk to me, it's going to be fine.' Yeah, but I think that's more than just 'the computer will talk to me.' I think there are parts of how it will talk to you, and when it uses emotion. There's a lot of product that goes back into research. I think no one really knows, to be honest, what the future product experience would look like because a lot of the interactions we're having, we just never thought we could have. So, you're only going to realize by having people use it. I think that happens a lot in research, where researchers spend so much time retaining and doing all the work, then you put it out, and in two minutes, someone figures out how to use it in a completely different way. 
Actually, I think that's great. It points to the fact that I think the previous generation of software was based on this idea of you choosing a vertical and just going there. I think the next generation of software is based on you choosing a principle of how you want to operate in the world, and you build models towards that. Our principle is that more of the pixels that you will watch will be generated or simulated. That's the surface that we're operating on. Therefore, you can go into many different products based on that idea. So, it's the difference between choosing a vertical and choosing a principle in which you want to operate. But right now, as you're deciding what products to build, you're getting market feedback from users. You have studios using the tool and agencies using the tool. You've got to make some decisions. We do. Where are we going to fill the gaps of the product, and where are we going to wait? How do you make those decisions? We focus a lot on research and on understanding what's coming and what's worth building. I think there's always a trade-off, specifically with startups, where if you spend too much time working on the wrong thing, it might actually kill you. I think we listen to users, but sometimes users don't really know what they want. They know the problems really well, but they can't articulate the exact solution for it. So, you don't find yourself building exactly what they're describing because they can't describe the thing that they don't know it's coming. So, I don't know. I think it's like art, I guess. You become just really good at intuition and being like, 'Okay, that thing, even if it could be a great deal now, we're not going to do it right now.' So I think companies overall build intuition, and that's just experience of doing it enough times and then saying no. You have to say no a lot of times. Customers come with great ideas, but just say no. Not because you don't think you can solve for them, but again, because it will trap you into the wrong thing for the wrong reason. This is the other question I ask everybody broadly. How do you make decisions? What's your framework? How do I make decisions? What kind of decisions? All of them. I think there are different decisions. There are decisions that are much more long-term and irreversible, and decisions that are much more reversible. I think we're very much of the idea that, again, run experiments and be willing to understand if you are wrong in your assumptions. If you need to make a decision, do it because you're confident it will work. If it doesn't, you can change your mind. Sometimes, product decisions come from that taste component. I think overall taste has become a good way of directing the company, I would say, from how we operate in marketing and how we hire. I don't think there's one particular framework, but just the overall idea of taste and intuition has become clear in how we make decisions. Do you think you're going to have to change that as you hit the next set of scale? At 100 people, you can be like, 'Just listen to me.' With 1,000 people, maybe not. That's the thing we keep referring to is the idea of a 'company company.' We don't want to be a 'company company.' A 'company company' is a company that behaves like a company because that's the way companies behave. You're like, 'No, don't do that. Be a company that's focused on solving a problem, a research constraint, or a user need. 
Don't focus on the things that are superficial that you're supposed to be doing just because you're a company.' Because the moment you lose that, you're dead. You're going to stop innovating. You're going to focus on the wrong things to optimize for. I think just culture, maybe, reinforces this to the team. I still interview everyone in the company. I'm still pretty much involved in how we make decisions on product. Organizations tend to seek slow velocity if they're not constantly pushing all the time. Do you think there's going to come a point where the split between the capabilities of the underlying model slows down, and that you have to put more into product? Maybe, but I don't think we're close to that. Even if we stop research now, like we decide collectively to stop research, I think there are 10 to 20 years of innovations that are just there, latent, waiting for someone to discover them. I don't think we're at that point yet where you can say, 'Hey, this is enough,' because I think there's just too much space to grow and have models to think. We just released a model two weeks ago, and I'm not kidding. Every day, I open our users on Twitter and Instagram, and there's a new use case. Now, just before coming here, someone was using it for clothes. So, you can try on anything. You basically go to any shop online, like an eCommerce site, upload a photo of yourself, and see yourself wearing that in a hyperrealistic manner. I just never thought you could use it for that, and you can. So, yeah. I was talking to Kevin Scott, the CTO of Microsoft, and he made the same point in a slightly different way. He said there are more capabilities in the models we have today than anyone knows what to do with. I agree. To me, it's like, 'Well, then we should start building products that make sense.' But then the tension is whether the next-generation models are just going to eat my product. When does that get stable enough so anybody can make products that are good? So here's a great example. That's a great distinction between verticals and principles. If you think about a vertical, then you'll choose a solution and you'll build towards that. If you think about a principle, you should assume that many of the things that we're trying to build into the product will eventually become features of new models. Therefore, your product should be many layers ahead if you want to spend time on it. So, their principles should be, for example, image generation, zero-shot. So, zero-shot learning (ZSL) means if you want to model to do something, you don't have to train it. You need to just show it examples. You can widely expand the range of things models can do if you have the right examples. So, maybe a good idea is to find and collect examples of things you can teach models for, and then it changes the way you can approach product. I think that the distinction between principles and verticals is relevant for that. One of the big trends in the industry is that the cost of every new model is getting exponentially higher. Sam Altman is touring the capitals of the world, being like, ' Can I have $1 trillion? ' Maybe he'll get it. You never know. He might get it. Yeah, maybe. Are you on the same cost curve where every new model is that much more expensive? Do you have $1 trillion? Is the answer yes? If you have one. So, I think AI tends to move in two ways. There's an expansion wave and an optimization wave. Expansion is like, well, we're discovering what we could do. 
If you think about the models from two or three years ago, yeah, they were expensive. Now, most of those models can be trained on your laptop because models have gone into a state where you can optimize them. One thing engineers love is optimizing things. So, if you tell them, here's the thing that works, optimize it, people will go very hard on it. For some models that are two or three years old, now that's the case. They're very cheap to train from scratch. I think there are new models that are still in the expansion phase. We haven't figured out exactly how to optimize them, but we will. But the thing that happens is the same thing if you spend too much time optimizing them; the trade-off is going to stop working on the new expansion. I think most companies these days are betting on expanding. So, they're betting on paying more for the sake of expanding that and not falling behind, rather than trying to optimize and reduce the cost of the thing that works. Where are you? I think we're on the expansion side. Having the ability to expand that, having the ability to innovate on that, it's way harder. And then having the ability to just catch up and play the optimization game is easier. I think our bet is like, well, this is the advantage point where you can keep on moving things and just pushing boundaries. The big platform companies, Microsoft, Google, Amazon, and OpenAI — which has a deal with Microsoft — run their own hyperscalers. Is that a competitive threat to you? Is that an advantage to you? Well, Google is an investor, so we work closely with them. Again, they're different functions of businesses. If you're a hyperscaler, you're probably in the business of optimizing things. You need to make things cheap and scalable for everyone. It's a different function from a research lab, which is building new things. So, again, it's probably good to pair the two. Because if you have a good research lab without optimization, then there's a transfer you can make technology-wise that will allow companies to just run on the things, sell them, and then get feedback. This is while the other part of the company is working on the next thing, which is where we are. If Google's an investor, you're running on [Google Cloud Platform]?. That's correct. So do you just let them buy the Nvidia H100s? Do you worry about that at all? Nvidia is also an investor. The AI industry is full of this, by the way. It's very obvious. Well, I think it's people who have seen this, and I think you want to provoke this. Many of the things we're discussing now weren't that obvious eight years ago until many people started to make the right bets on it. I think again, depending on where you are, it might be a good function to partner with people who get it and who want to work with you long-term. I think the people we work with can help us get to that point. Yeah. I think Nvidia as an investor is one of those things about the AI industry that is very funny, right? They're investing in the applications that drive the usage of their chips and all these places. Maybe some of them will pay off, and maybe they won't. That's the nature of investing, but at some point, everything has to add up to actually deliver a return for Nvidia. Do you feel that pressure that Runway has to be a big enough business to justify all of the infrastructure expenses? I think that justification comes from the value you see with customers and the adoption that you see. 
I think that's how you see AI products go from zero to many millions of revenue in a couple of weeks or months, something that was unseen before. It's because it's such a different experience, such a different value, that if you're ambitious about it, I think, yeah, it will definitely get there. We're already seeing this. Still, video, for example, is very early. Gen-4, our latest model, is literally a month and a half old. So, most of the world hasn't experienced it yet. It's also a distribution problem. How do you get to everyone out there who can use it? Are you at millions in revenue? Yeah, more than that. Do you have a path to billions in revenue? We hope, yeah, over the next couple of years. I'm asking because all these companies have to generate billions in revenue for all these investments. I think they will. Many will. I mean, again, think about it from first principles. If you're in the business of ads or movie-making, you're spending hundreds of millions of dollars to make one movie. If I can take that process and help you do it for a couple of million, then I can literally charge for whatever delta I'm helping you improve. Hopefully, I can charge you way less, so you can actually do more. If you expand that, then you're not only helping them, but you're also expanding the window of who can do that thing in the first place. Because if you think about professional filmmaking, it's a very niche, small industry, mostly because it's very expensive. Well, if I have something that makes it cheaper, then I can expand the definition of who can get into the industry in the first place. From a market perspective, that's great because you've got many more people who can do something that they never thought they could. The film industry is really interesting. It's under a lot of pressure, so much pressure that HBO Max just keeps renaming itself every six months to get whatever attention it can. It's great. It works, I guess. But fundamentally, they're competing with TikTokers and YouTubers, right? Netflix knows this. Netflix knows that YouTube is its biggest competition. The cost to make a YouTube video is already only a fraction of the cost to make a Marvel movie, and that has basically put the movie industry under a ton of pressure. Do you think AI can actually shrink that gap and keep the quality high? Yeah, so I think that's the point. I think the last frontier was low-quality content that anyone could make. I think that's TikTok and YouTube. There are billions of people out there making everything. The difference between that and a high-production studio is the quality of the content, the output, and how good the output of the pixels and the videos is. That, for me, is mostly a technical barrier. It's not a storytelling one. It's not an idea one. Making a high-end science fiction movie is really expensive because you have to hire so many people and work with software that is very expensive. So the last frontier, I would say for us and for many media companies, is billions of people making high-end content. That is the one idea that, if you're in the traditional business of media and you haven't realized it yet, probably has you very scared, because then you'll compete with anyone in any part of the world who has a small budget, very good ideas, and can make amazing things. We're already seeing this. The Academy Award for animation this year, I don't know if you've seen it, went to a movie called Flow. Very small budget, I think less than $10 million.
It was just a very good group of people working with great software, and they won the Academy Award against $100 or $200 million productions. It's just because you have very smart, talented people working with the right software tools. So the flip side of this is those studios are also jealously protective of their IP. That's the thing that they monetize. They window it into different distribution channels and into different regions. They sue pirates who steal it on BitTorrent. You trained on a lot of this content. There's reporting that Runway trained on a bunch of YouTube channels, including The Verge's, by the way. There's your $1 trillion. This is, in my mind, the single greatest threat to the already exorbitant cost structure of the AI industry. There are lawsuits everywhere that might say you have to pay all of those creators for their work. Have you thought about that risk? I think it's part of how we analyze and how we work. We've worked with different studios and companies to understand how to train the models for the needs that they have and what they want to do. Still, it's crucial for me to help everyone understand what these models are actually doing. A lot of the assumptions that we get around AI video are that you type in a prompt and you get a movie. Now it happens less often, but I used to get a lot of scripts in my inbox where people would say, 'Hey, I'm a producer or a writer. I've been working on this show. I have the whole script done. It's great. I heard you do AI videos. So here's the script, make my movie.' I've realized a lot of people thought that what AI video, AI pixel generation, or making videos with AI meant was that you type in a prompt and you get the entire movie that you thought you were going to get. No, it doesn't work like that. It will probably never work like that. You're still pretty much involved. You still need to know how to use the model. You need to give the model the directions and the inputs you want to use. I think part of it is that perhaps most people's experiences with AI over the last 12 months have been through chatbots. So the idea of AI has been condensed to this idea of chatbots. If you have a chatbot, you have AI, and those things are summarizing a huge field into a very oversimplified concept. So when you think about copyright and you think about creating things, I think all the weight is still in what you are making. You're still in control, and these are not tools that will make things on their own. You are the one deciding how to make them in a way. So you have to be responsible in how you use them. That's basically the point. But to train the model, you need to ingest a huge amount of data. The two things that make the models more effective in an expansion mode are more compute and more data. Have you thought about whether you're going to have to pay for the data you ingested into the model? So we've done partnerships to get data that we need in particular ways, but again, it's really important to understand that these models are not trying to replicate the data. I think the common misconception people make is that you can type in a scene of a movie and you get the scene of that movie in Runway. These are not databases. They're not storing the data. They're learning. They're like students learning about data, finding patterns within that data, and they use that to create something net new. So the argument that I think is really important to consider is that these systems are creating net-new things, specifically for videos.
They're creating net-new pixels, net-new everything. The way you use them should be in a responsible way, of course. The models are not trying to store anything. So that for me is the distinction, because it changes the argument of how you think about training models in the first place. If you think about them as databases, you're going to have a set of different assumptions, use cases, and concerns than if you think about them as general-purpose tools like a camera. I always think of Runway as a camera. A camera allows you to do anything you want. It's up to you how you want to use it. You can get in trouble for using a camera, or you can make a great film by using a camera. So, you choose. It's shockingly easy to get in trouble for using a camera. [Laughs] Yeah, I know. I grew up in Chile. There are a lot of films I didn't manage to see [in theaters], and the way I saw them was that I bought them as bootlegs on street corners. I don't know if you've ever seen one of those where people stand in the theater and just record the thing. I mean that was a bad use of cameras, but I think the overall assumption as a society was like, 'Let's not ban cameras. Let's actually have a norm in theaters where you can't do that. If you do, you're going to get in trouble.' I think we all agree that that's a good thing to do. That argument is weaving its way through the legal system right now. There are lots and lots of court cases. The last time we went through this, it was basically Google that won a bunch of court cases about building databases. But Google was a friendly young company that had slides in the office; people wore beanies when they went to work. The inherent utility of Google search was very obvious to every judge. The inherent utility of YouTube, which got in a lot of trouble, was very obvious to every judge. They horsepowered their way through it. They had to pay some money to some people, and they had to win some cases. They had to invest a lot into litigation, and they won because they were cute and they were Google. It was a very different time. These companies are not broadly thought of as young and cute anymore. No one thinks of Meta, Amazon, and Google as adorable companies that should build the future the way that they were at the time. Have you thought about the risk that they might lose these cases and what that would do to your business? Because this dynamic you're talking about — whether this is a non-infringing use, whether there's broad utility here — this argument goes back to the Betamax case in the '80s. It's all there, but it doesn't have to go the way that it always did, right? Judges are just a bunch of people, as we've discovered here in America. They just make decisions. What if it doesn't go your way? Yeah, again, it's hard for me to have an opinion on every single case out there. I think it's more complex than that. I think Google has had a great impact on the world at large. I think it's hard to disagree on that. I think the world has gotten way more expansive. Information has become more accessible to many. I think that's hard to disagree with, right? I think there are definitely new challenges with every new technology. I don't disagree with that. I mean, you are putting really powerful technology in the hands of everyone, which means everyone, right? So there are use cases around AI that you should be preventing, and you should try to make sure you have systems of regulation and safety on top. I think every company is different.
One thing I've really learned about tech, and I mention this as an artist… I went to art school, and I started working in tech mostly as a way to develop my vision of how art should work with tech. That was my idea. So I still consider myself an outsider to tech, and I think one thing I would consider is that not everyone operates in the same way. I think not all companies are the same. Companies tend to be different in how they operate, and I think there are different ways of managing through this change. It's hard for me to lump everyone into the same group and say, 'Yeah, all tech companies are basically doing the same thing.'

Let me try this a different way. You trained on YouTube channels, right?

We train on a variety of different data sets, and so we have teams working on image, video, text, and audio. We don't disclose how we train our models because that's unique to, I guess, our research.

Did you train on YouTube?

Again, we have a variety of different data sets that we use to train our models, depending on the task. It's not about, 'Do we train on this, on that?' We have agreements with different companies. We have partnerships with others. The way we train is very unique to us. It's very competitive out there, so we're probably never going to tell how we do it, because it's very unique to how we train our models.

YouTubers own the copyrights to their videos. If it comes out that you trained on YouTube and hundreds of YouTubers come asking you for money at whatever rates, is the financial model of Runway still tenable?

I guess it goes back to what these models are doing, right?

Well, I'm saying that if OpenAI loses its case against the New York Times and training on the Times' content is found to be infringing, the floodgates will open. It is not clear if OpenAI will win or lose. If Meta loses its cases against the book publishers — and it's not doing great in the past couple of weeks — the floodgates are open. If those floodgates open, is your business tenable?

I think, again, summarizing the entire AI industry as chatbots and what one company is doing is a mistake. Video and media work very differently, and there are a lot of other considerations. A lot of the assumptions I've seen about how AI video works are like opinions about cell phones in 1992. You're probably just very early in seeing the impact of how that technology will change the industry, and you've probably never experienced it before. So, I think part of what is going to happen over time is that a lot of these ideas around concern for copyright and other considerations will start to change as people understand how this actually works. I'll give you an example. I was at a dinner with a producer of a major show, one you've all probably seen. He was like, 'I'm very anti-AI.' I said, 'Okay, why are you anti-AI?' He's like, 'Well, because it works like this and it does this.' I was like, 'No, it doesn't. Let me show you how it works.' Then we showed him how it works, and he was like, 'Yeah, now I'm on board.' It took me like 25 minutes. He was very adamant about his position against AI because, I realized, he just had the wrong expectations about what it did. It took a minute of, okay, let me show you what it does. It's like you've never experienced this before. We forget this, but we all had to go through training to send our first email. People were just telling you how to send an email, and you had to go through it.
You don't really understand it until you start using it. You understand the limitations of it and the constraints of it, and then you start using it. I think a lot of the hard takes on AI these days are based on the wrong expectations and the wrong assumptions about what it actually does.

That gap between how artists feel about AI and how much they actually use it seems like it's getting bigger every day. It shows up on our site at The Verge. By the way, The Verge is built on the very foundation that I was right about my opinions about cell phones in 1992. [Laughs] One of few. But we see it, right? People read the articles. I talk to product people at other companies. With Adobe, for example, the usage rate of generative AI in Adobe products is basically 100%. Generative fill is used as often as layers, which means everyone uses it every day, and then the audience is like, 'I hate this. Make it go away.' There's just this gap. It's a moral gap. It's a psychological gap, whatever it is. There's a gap between how people are using it, how they talk about it, and how they feel about it, particularly with creatives and artists. I know you spend a lot of time with creatives. How are you closing that gap? Is it possible to close that gap?

I don't see that gap that often. In film, there's the idea of below the line and above the line. If you speak with a VFX artist, someone who's actually moving the pixels on a screen, they don't have weekends. They've never had a weekend off, because when you're on a project, it's a very tough timeline with very small budgets. The director comes with notes, and you have to take the notes. It's a Friday, and there goes your weekend. You're going to be working on pushing those edits every day, and you're doing it by hand. So, if you have a tool that allows you to do it faster, of course you will use it. It's great. It will get you where you need to go faster. I think the gap there is not as big as some people might think, because the actual creative minds, the producers, the editors, and the VFX artists, are already embracing this. It is very valuable, and I guess I'm not surprised about your stats and numbers. I think, still, above the line, the people who have never actually had the experience of working with it and seeing it might have a different assumption of how it works. Again, I think part of it is just that we need to show you how it actually works. Something we do is… We have a film festival here in New York, by the way, if anyone here wants to go. We've done it for three years now. It's at Lincoln Center. It's a major event. It gathers filmmakers from all over the world. We started the festival with 300 submissions. This year, we got 6,000 submissions. We work with the American Cinema Editors, which is one of the editors' guilds, and we work with the Tribeca Film Festival, so those are the industry partners. It's a great way of understanding how it's actually being used in real production use cases and how valuable it is, not only for the insiders but also for the new voices. I think part of the gap is that you need to go to a film festival to experience it, and you'll probably get a sense of how useful it is.

The concern from that class of people that we hear all the time is, 'This is great. It made everyone's life a little bit easier. It also puts half of us out of work.' Do you see that as a real threat or as a real outcome?

I understand the concern, but I think the obsession should be with people more than jobs.
We used to have people who pressed buttons in elevators. That was a job. I don't know if you guys remember this. That was a job. There was a job of people throwing stones at your window to wake you up before alarm clocks were invented. No one is saying we should protect the people who throw rocks because that's their job. We should have alarm clocks, and the person who's throwing rocks to wake you up should be taught how to do something else. So, you focus on the people and how you upskill them, teach them, and help them learn to do new things, rather than, 'Hey, let's keep this thing because we need people pressing buttons in elevators, and that's a job.' I think that has happened in Hollywood many times. In the beginning, Hollywood was silent. There were silent movies. Then talkies came around. It was a major breakthrough where you could actually have sound in movies. The industry revolted. Charles Chaplin was one of the biggest advocates against films with sound because he said that sound would kill the essence of filmmaking. An argument they had was, 'Who's going to pay the orchestras that are playing in the theaters?' Well, it's true. Yeah, we don't need orchestras in theaters anymore. But the technology also gave birth to an entirely new industry of artists. Hans Zimmer: that was the beginning of an entirely new industry created by technology. I think this is, for me, very similar, where yes, we're going to lose some jobs. Our job should be to train those people to do new things with the technology.

Last question. If you had to spin that all the way out, you're successful; the AI industry can pull this off. The models get the capabilities you want them to have. What does the film industry look like 10 years from now?

I think it looks very much like…

It's not just TikTok? Are we just going to do Quibi?

[Laughs] No, I mean, if someone likes making that, I don't think there's anything wrong with it. I think there are many independent voices out there who have never had the chance to tell their stories because they don't have the means to tell them. Our vision at Runway is that the best stories have yet to be told. We haven't heard from the greatest storyteller in the world because maybe they just weren't born in LA. That probably is the case, and so I think we're going to see a much more democratized version of film. We're going to have a version of storytelling that's for everyone, and the bar for it will be the ideas. It won't be who you know in the industry or how much money you have. It'll be how good the thing you want to say is and how good you are at saying it.

Well, Cris, this has been amazing. You're going to have to come back on Decoder soon.

Of course. Thank you for having me.

Questions or comments about this episode? Hit us up at decoder@ We really do read every email!