AI's Napster Moment May Be Next


Yahoo | May 29, 2025
Two years ago this month, the Writers Guild of America went on strike for, among other key issues, a set of landmark AI protections to safeguard our writing and our finished work. It took six months of sustained solidarity, picketing and an outpouring of public support — from fellow unions, politicians and, yes, even Elon Musk — before the studios finally met our demands.
Today, writers, journalists and creatives across every medium are facing a new kind of existential threat to their professions from the interests of Silicon Valley. Tech companies have been openly lobbying the White House to rewrite copyright law to freely train their models on films, television shows, articles, books and beyond without paying so much as a dime to studios, publishers or writers. OpenAI has been referring to this as 'the freedom to learn.' In New Jersey, it's just called theft.
Three weeks ago, Shira Perlmutter, the Register of Copyrights and director of the U.S. Copyright Office, was fired shortly after her office published a new report on generative AI training. The report advocated for preserving current copyright law and signaled a setback for Sam Altman's craven attempt to expand the fair use doctrine into a smash-and-grab campaign on decades of copyrighted material. In other words, tech companies cannot use stolen intellectual property in their models.
In the meantime, dozens of ongoing lawsuits against OpenAI (ChatGPT), Anthropic (Claude), Google (Gemini) and Meta (Llama) are making their way through the courts. Without a sympathetic judge willing to distort fair use, these companies could find themselves liable for billions of dollars in theft, potentially even more in damages, and in some cases face total bankruptcy.
But we've been down this road before with Napster. The company launched as a free file-sharing app in 1999. Almost overnight, people around the world could download entire music libraries in seconds. It was a groundbreaking use of technology that, at the time, felt too good to be true. And that's because it was. Napster was quickly sued out of existence for copyright infringement and racketeering.
Since then, every major media tech platform — YouTube, Spotify and, until recently, even OpenAI — has been making deals with studios, publishers and labels to compensate them for using their copyrighted work. While many of these agreements have been deeply flawed and exploitative, the firing of Shira Perlmutter represents a pivot to something much worse.
Silicon Valley is now asking the government for permission to steal our copyrights in order to escape its pending lawsuits. These companies are knowingly trying to devalue our profession and countless others in order to freely enrich themselves without caring about the consequences.
In the case of screenwriters, copyright of an original script is sold to the studios in exchange for core benefits like health care, pension and residuals. This is one of the bedrock principles that has helped sustain writers during the highs and lows of this business. But if copyright protections were to be stripped away, then so begins the unraveling of the entire value proposition that union members past and present have fought so hard to maintain in the film and television industry.
In April, Sam Altman appeared on stage with Chris Anderson, the head of TED, who pointed out, 'At first glance, this (ChatGPT) looks like I.P. theft.' The audience erupted in applause until Sam fired back, 'Clap about that all you want. Enjoy.' And then Sam said something far more revealing, 'We do need to figure out some new model around the economics for compensating creative output. Exactly what that looks like I'm not sure.'
But the copyright system isn't broken. Companies like OpenAI simply don't want to negotiate consent and pay. So, Sam can shrug about 'the economics for compensating creative output' all he wants. Enjoy.
The solution will always be the same: stop stealing writers' work and follow the law.
Larry J. Cohen and Sarah Montana are the co-chairs of the Writers Guild of America East AI Task Force. Cohen is a writer, producer and director who currently serves on the WGAE council. Montana is a writer and public speaker who also serves on the WGAE council.

Related Articles

AI agents aren't the ‘new Google,' says Airbnb CEO

Yahoo

21 minutes ago



After a second-quarter earnings beat, Airbnb CEO Brian Chesky shared his thoughts on the company's AI strategy, cautioning investors that AI chatbots can't yet be thought of as the 'new Google.' That is, AI chatbots, while potentially driving new leads to the travel and services business, aren't entirely a replacement for the referrals that the dominant search engine brings. At least not at this time.

'I think we're still kind of feeling out the space,' the exec told investors on the Q2 earnings call. 'The thing I want to caution is I don't think that AI agents — I don't think we should think of chatbots like Google — I don't think we should think of them as the 'new Google' yet.'

This, Chesky explained, is because AI models aren't 'proprietary.' 'We also have to remember that the model powering ChatGPT is not proprietary. It's not exclusive to ChatGPT. We — Airbnb — can also use the API, and there are other models that we can use,' he said.

Painting a broader picture of the AI landscape, Chesky said that in addition to chatbots and other AI agents, there will be custom-built startups designed for specific applications, as well as other incumbents that have made the shift to AI. 'One of the things we've noticed is it's not enough to just have … the best model. You have to be able to tune the model and build a custom interface for the right application. And I think that's the key,' he said.

The company told investors it will look to take advantage of AI in a number of ways. Airbnb shared during the call that its AI customer service agent in the U.S. reduced the percentage of guests contacting a human agent by 15%, for instance. This was actually harder than tackling the lower-hanging fruit of travel planning and inspiration, Chesky said, because AI agents performing customer service can't hallucinate. They have to be accurate and helpful at all times.
Airbnb's customer service agent was built using 13 different models and trained on tens of thousands of conversations, and is currently available in English in the U.S. This year, Airbnb will roll it out to more languages, and next year, it will become more personalized and agentic. That means that if someone reaches out to cancel a reservation, the agent would not only be able to tell them how to do so, but could also do it for them. The agent could also help plan and book trips. Plus, AI will come to Airbnb's search next year, the CEO said.

However, the company has not fully fleshed out its plans for working with third-party AI agents, although it's considering them. Users still need an Airbnb account to make a booking. Because of this, Chesky doesn't think agentic AI would turn its business into a commodity, the way booking flights has become. Instead, he sees AI as 'potentially interesting lead generation' for the company.

'… I think the key thing is going to be for us to lead and become the first place for people to book travel on Airbnb. As far as whether or not we integrate with AI agents, I think that's something that we're certainly open to,' he said.

Airbnb beat analysts' expectations in the quarter with revenue of $3.1 billion and earnings per share of $1.03, but the stock dropped on its forecast of slower growth in the second half of the year.

Katie Miller is leaving Musk World to start a podcast for conservative moms

Yahoo

21 minutes ago



Katie Miller, once the communications lead for DOGE, is ending her full-time work with Elon Musk. She is launching her own podcast for conservative women, which will air weekly and feature people "across the political spectrum."

Goodbye, Elon Musk. Hello, conservative women.

Miller, a former face of DOGE-world and Musk aide, posted a video on X on Thursday morning announcing that she's starting the "Katie Miller Podcast," aimed at conservative women. She's ending her full-time work with Musk to launch the venture.

"For years I've seen that there isn't a place for conservative women to gather online," Miller said from her plant-lined living room. "There isn't a place for a mom like me, mom of three young kids — four, three, and almost two — and a wife, and trying to do a career, eat healthy, work out."

Miller said the podcast will focus on lifestyle, news, and gossip, and that she plans to talk to people "across the political spectrum," business leaders, and celebrities. It will air weekly on Mondays.

The only mention of Musk comes more than halfway through the short video, when Miller talks about her career in communications and government. "Most recently, I'm concluding my time working full-time for Elon Musk," she said.

Miller was at Musk's side during the height of the DOGE era and served as the venture's spokesperson. Both Miller and Musk left their government work in May, and she continued working for the former face of DOGE in the private sector. As recently as last month, Miller was sending out communications for xAI, one of Musk's companies. Miller and Musk did not immediately respond to BI's request for comment.

Miller is married to Stephen Miller, the White House's deputy chief of staff. Musk and President Donald Trump had a high-profile falling out in June over the president's "Big Beautiful Bill."
On X, Miller has posted about popular lifestyle themes in conservative circles, including falling birth rates and whole foods, a prominent part of the Make America Healthy Again ethos. Read the original article on Business Insider.

Why OpenAI's open-source models matter

Fast Company

23 minutes ago



Why OpenAI's new open-weight models matter

OpenAI is opening up again. The company's release of two 'open-weight' models — gpt-oss-120b and gpt-oss-20b — this month marks a major shift from its 2019 pivot away from transparency, when it began keeping its most advanced research under wraps after a breakthrough in model scaling and compute. Now, with GPT-5 on the horizon, OpenAI is signaling a return, at least in part, to its original ethos.

These new models come with all their internal weights exposed, meaning developers can inspect and fine-tune how they work. That doesn't make them 'open-source' in the strictest sense — the training data and source code remain closed — but it does make them more accessible and adaptable than anything OpenAI has offered in years.

The move matters, not just because of the models themselves, but because of who's behind them. OpenAI is still the dominant force in generative AI, with ChatGPT as its flagship consumer product. When a leader of that stature starts releasing open models, it sends a signal across the industry. 'Open models are here to stay,' says Anyscale cofounder Robert Nishihara. 'Now that OpenAI is competing on this front, the landscape will get much more competitive and we can expect to see better open models.'

Enterprises — especially ones in regulated industries like healthcare or finance — like to build on open models so that they can tailor them to their needs, and so they can run the models on in-house servers or in private clouds rather than undertaking the high cost and security risks of sending their (possibly sensitive or proprietary) data out to a third-party LLM such as OpenAI's GPT-4.5, Anthropic's Claude, or Google's Gemini. OpenAI's oss models are licensed under Apache 2.0, meaning developers can use, modify, and even commercialize them, as long as they credit OpenAI and waive any patent claims.

None of that would matter if the models weren't state of the art, but they are.
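A back-of-envelope calculation shows what "runnable locally" means in practice for models of this size. This is a rough sketch, not OpenAI's published deployment math: the parameter counts are read off the model names, and the bits-per-parameter figures are generic precision levels used across the industry, not the specific quantization these models ship with.

```python
# Rough memory footprint for a model's weights alone.
# Activations and the KV cache add further overhead on top of this,
# so these numbers are a floor, not a full hardware requirement.
def weight_memory_gb(num_params: float, bits_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

# Parameter counts inferred from the model names (hypothetical round numbers).
for name, params in [("gpt-oss-20b", 20e9), ("gpt-oss-120b", 120e9)]:
    for label, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
        print(f"{name} at {label}: ~{weight_memory_gb(params, bits):.0f} GB")
```

At full 16-bit precision, a 20-billion-parameter model needs roughly 40 GB for weights alone, well beyond a 16 GB laptop; only at around 4 bits per parameter (about 10 GB) does an on-device fit become plausible, which is why aggressive quantization is central to the small-model story.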
The larger gpt-oss-120b (120 billion parameters) model matches OpenAI's o4-mini on core reasoning benchmarks while running on a single graphics processing unit (GPU), OpenAI says. The smaller gpt-oss-20b model performs on par with the company's o3-mini, and is compact enough to run on edge devices with just 16 GB of memory (like a high-end laptop).

That small size matters a lot. Many in the industry believe that small models running on personal devices could be the wave of the future. On-device models, after all, don't have to connect to the cloud to process data, so they are more secure and can keep data private more easily. Small models are also often trained to do a relatively narrow task (like quality inspection in a factory or language translation on a phone).

The release could also accelerate the broader ecosystem of open AI infrastructure. 'The more popular open models become, the more important open-source infrastructure for deploying those models becomes,' Nishihara says. 'We're seeing the rise of open models complemented by the emergence of high-quality open-source infrastructure for training and serving those models — this includes projects like Ray and vLLM.'

There's also a geopolitical subtext. The Trump administration has increasingly framed AI as a strategic asset in its rivalry with China, pushing American companies to shape global norms and infrastructure. Open-weight models from a top U.S. lab — built to run on Nvidia chips — could spread quickly across regions like Africa and the Middle East, countering the rise of free Chinese models tuned for Huawei hardware. It's a soft-power play, not unlike the U.S. dollar's dominance as a global currency.

Google's new Genie 3 world models could enable wild new forms of gaming and entertainment

With the right prompt, AI models can generate words, voices, music, images, video, and other things. And the quality of those generations continues to grow.
Google DeepMind has pushed the boundaries even further with its 'world models,' capable of generating live, interactive environments that users can navigate and modify in real time. Words alone don't fully capture the capabilities of DeepMind's new Genie 3 model. A demo video shows a number of lifelike worlds (a desert, a scuba diving scene, a living room, a canal city, etc.). At one point, the user adds a whimsical element to the 'canal city' world by writing the prompt: 'A man in a chicken suit emerges from the left of the shot and runs down the towpath hugging the wall.' And the man in the chicken suit immediately appears in the world. Then, the user drops a dinosaur into the nearby canal. Splash.

The most obvious application of this kind of AI is in gaming, where a model could generate an endless stream of environments and game scenarios for the gamer. It's a natural research focus for DeepMind, which focused its early AI research on video game environments.

The potential for world modeling is enormous. Future versions of the Genie model could enable 'choose your own adventure' experiences in video or AR formats, where storytelling adapts dynamically to the viewer's preferences, interests, and impulses. As Google notes, companies working on self-driving cars or robotics could also benefit, using these models to simulate real-world conditions that would be costly or impractical to recreate physically.

The AI industry responds to AI tool abuse by students

As the new school year approaches, educators and parents continue to worry that students are using AI tools to do their schoolwork for them. The danger is that students can rely heavily on AI to generate answers to questions, while failing to learn all the contextual material they would encounter in the process of finding answers on their own. A growing body of research suggests that relying on AI harms overall academic performance.
Now OpenAI and Google have each responded to this worrisome situation by releasing special 'study modes' inside their respective AI chatbots. OpenAI's tool is called ChatGPT 'study mode,' while Google offers a similar feature within its Gemini chatbot called Guided Learning.

The tools' formats and features seem remarkably similar. Both break down complex problems into smaller chunks and then walk the student through them using a question-and-answer approach. Google says its questions are designed to teach students the 'how' and 'why' behind a topic, encouraging learning throughout the exchange. OpenAI says its tool uses 'Socratic questioning, hints, and self-reflection prompts to guide understanding and promote active learning.' Both OpenAI and Google say that the teaching approach and format are based on research by learning experts.

Still, the student is ultimately in control of what AI tools they use. OpenAI says that users can easily toggle between regular chatbot mode and the study mode. Google says it believes students need AI for both traditional question searches and for guided study. So these new learning tools may provide an alternative mode of learning using AI, but they're not likely to significantly shift the argument around AI's threat to real learning.
