
Latest news with #KateKnibbs

Join Our Next Livestream: Inside the AI Copyright Battles with WIRED Reporters

WIRED

11-07-2025

  • Entertainment
  • WIRED

Join Our Next Livestream: Inside the AI Copyright Battles with WIRED Reporters

Jul 11, 2025 1:40 PM

Curious about generative AI and copyright? Subscribers can join WIRED live on July 16 as we answer your questions about this critical topic.

Photo-illustration: WIRED Staff; Getty Images

What's going on right now with the copyright battles over artificial intelligence? Many lawsuits over generative AI's training materials were initially filed back in 2023, and decisions are just now starting to trickle out. Whether it's Midjourney generating videos of Disney characters, like Wall-E brandishing a gun, or an exit interview with a top AI lawyer as he left Meta, WIRED senior writer Kate Knibbs has been following this fight for years—and she's ready to answer your questions.

Bring all your burning questions about the AI copyright battles to WIRED's next subscriber-only livestream, scheduled for July 16 at 12 pm ET / 9 am PT and hosted by Reece Rogers with Kate Knibbs. The event will be streamed right here. For subscribers who are not able to join, a replay of the livestream will be available after the event. You can help us prepare by submitting any questions you have before the livestream here, or by leaving a comment below.

Kate Knibbs and Reece Rogers will answer your questions at our next livestream on July 16, 2025, at 12 pm ET / 9 am PT.

You Asked, We Answered: All of Your AI Angst

WIRED

10-07-2025

  • Entertainment
  • WIRED

You Asked, We Answered: All of Your AI Angst

Photo-Illustration:

All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links.

This week, our host Lauren Goode, along with two of our senior writers, Kate Knibbs and Paresh Dave, dives into the show's inbox to answer listeners' questions. We look into a range of queries—from how AI is shaping the film industry to brainstorming what the Jony Ive and OpenAI collaboration could look like.

Mentioned in this episode:
"This Viral AI Chatbot Will Lie and Say It's Human" by Lauren Goode and Tom Simonite
"A Political Battle Is Brewing Over Data Centers" by Molly Taft

You can follow Lauren Goode on Bluesky at @laurengoode, Kate Knibbs on Bluesky at @knibbs, and Paresh Dave on Bluesky at @peard33. Write to us at uncannyvalley@

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Lauren Goode: This is WIRED's Uncanny Valley, a show about the people, power, and influence of Silicon Valley. I'm Lauren Goode. I'm a senior correspondent at WIRED. Today we are bringing you a different kind of episode. Mike and Katie are out this week on well-deserved vacations. So with the help of our Uncanny Valley producers, I went deep into the show's inbox to see what you all were curious about. You've been sending us some really great questions. So we chose five excellent questions ranging from how AI has impacted the film industry, to what it means for our healthcare future when chatbots are spitting out false information, to what we can expect of the much talked about Jony Ive and Sam Altman collab. I was determined to find good answers and I didn't think that I could answer them all on my own. So I enlisted the help of two brilliant colleagues at WIRED to help me answer your questions.

Kate Knibbs: My name is Kate Knibbs. I'm a senior writer at WIRED.

Paresh Dave: And I'm Paresh Dave, senior writer at WIRED.

Lauren Goode: Hey Kate, how are you doing this morning, this afternoon? What time is it?

Kate Knibbs: Time is just a concept, Lauren. And I'm good. I am planning on eating ice cream later today, so that's sort of the prize that I have my eye on.

Lauren Goode: Why later? I mean, why not just eat it now on the show?

Kate Knibbs: Because I have to go to a doctor's appointment. I'm very pregnant and they're going to weigh me, so I'm going to eat the ice cream after I get weighed. It's sort of a ritual I have.

Lauren Goode: You've got it all worked out. Like a ritual, as in you typically eat ice cream after the doctor's appointments?

Kate Knibbs: Yes, and in between the doctor's appointments, to be clear, but always after.

Lauren Goode: I love this. This is like when you're a little kid and your mom drags you to the bank and they give you a lollipop afterwards for being so patient.

Kate Knibbs: Exactly. Bringing that into our adult lives.

Lauren Goode: That's wonderful. But for now, I do have to ask you to answer some burning questions. Thank you so much for being game to come on Uncanny Valley and dig into the mailbag. I'll read to you the first question and we'll go from there.
The first one comes from Janae, who was writing to us from London, and she says, "One topic that recently piqued my interest was the impact of AI on the film industry. I was just reading an article around how AI is impacting how films are made and the trade-offs between the use of AI relating to creativity and to budgets." Kate, what would you say broadly, beyond that, are the main changes you've seen to how films and TV shows are made in Hollywood now that generative AI has entered the scene?

Kate Knibbs: So I think looking at how AI is changing film and television is a great barometer for how quickly AI is advancing because it's really already being used in every step in the production process, and it's not a fringe thing at all. Like The Brutalist, which was a best picture nominee last year, used AI in several different ways.

Lauren Goode: Oh, wow. I didn't realize that. I just watched it.

Kate Knibbs: Yeah, it was like a minor controversy. I think it was making sure Adrien Brody's pronunciation was correct, they used AI to do that. But yeah, so it's very much already embedded in mainstream Hollywood filmmaking and also in distribution. When you watch a foreign language show on Netflix, if you're watching it dubbed, you are watching AI integrated into the process because Netflix uses AI to dub foreign language versions of its shows. So if you want to watch Squid Game in English so that they're saying words in English versus subtitles, which just read the subtitles in my opinion. But that's neither here nor there. If you're watching the dubbing, you're watching AI voices.

Lauren Goode: Fascinating.

Kate Knibbs: Yeah, yeah.

Lauren Goode: Does that mean AI is actually being used so that the mouth movements of the actors are matching the dubbing?

Kate Knibbs: Yes, they're trying. It looks really weird.

Lauren Goode: I had no idea. That is fascinating. And there are entire companies and teams of human beings who in the past and hopefully still, are dedicated to localization, to making that happen. But now it's happening with AI.

Kate Knibbs: Yeah, they might not be having the best year. And then another part of the film and television industry that AI has made a big impact on is storyboarding, which is when they're developing a TV show or movie, they hire visual artists to sort of sketch out how the sequences will look. I've talked to a lot of visual artists in that field who say that that whole field is getting completely wiped out basically because it's so easy to have image generators mock up storyboards now. And even really big name action movies that you've probably heard of involving superheroes are using that kind of technology. So those are just a few examples, but basically anything you can think of, there's some sort of experiment being done with GenAI tools.

Lauren Goode: And what does all of this mean for the Hollywood labor market? Obviously AI was a big topic of contention during the strikes a couple of summers ago. Where are we now?

Kate Knibbs: So there's not one monolithic response to AI, but I think the fact that it was such a point of contention is really indicative of how a lot of people in crew and actors and actresses feel, which is threatened, because this technology really, it will augment some jobs for sure, but it is already, as I talked about with the storyboarding, replacing some work that was formerly done by humans. And so there's a lot of pushback. There's a lot of, I think, valid trepidation.

Lauren Goode: Right.
The sense is that it's really going to benefit the studios, their bottom line versus the workers, the character actors.

Kate Knibbs: Definitely will. There are, I will say though, I actually talked to a group of documentary filmmakers a few weeks ago who are all very interested in incorporating AI into their processes and already are. And on the director and producer side of things, there are some really prominent directors and producers who are also embracing this tech. Darren Aronofsky, the director of Black Swan and Requiem For A Dream, has an AI film studio and he has a partnership with Google's DeepMind, and I'm sure whatever he does with that is going to be just as upsetting as Requiem For A Dream in a different way.

Lauren Goode: Did I also ever tell you about my bus ride in Lisbon with Darren Aronofsky?

Kate Knibbs: No. What happened?

Lauren Goode: I had interviewed him at a conference. Darren strikes me as someone who's always been pretty tech forward, and at the time, this was 2018 or 2019, it was at a conference in Lisbon, Portugal, and he was doing a lot in VR. So I interviewed him on stage at this conference about that, and then afterwards, a bunch of us were going to the same dinner and he and I and another person ended up on a bus that took forever to get across Lisbon to this dinner. So I was stoked as a journalist, thinking, "I'm literally sitting in the back of the bus with Darren Aronofsky and I get to ask him all these questions." I'm sure he was like, "Get me the hell off of this bus." And then we got to the dinner and he introduced me to the giant wave surfer, Garrett McNamara, who was in town because he was surfing those giant waves in Nazaré. And I have to tell you, Kate, I've never had anything feel so much like a fever dream.

Kate Knibbs: I was mildly envious of the Aronofsky thing, but the big wave surfers are my heroes, so that's very cool. He should make a movie about them.

Lauren Goode: HBO already did this fantastic 100 Foot Wave docu-series about them. But yeah, soon enough that'll all be replaced with AI. There'll be like 150-foot waves just completely propped up with AI.

Kate Knibbs: See, no, that's what you can't replace. You can't replace Mother Nature, baby.

Lauren Goode: You cannot. You cannot. So you mentioned that Darren is doing a partnership with Google DeepMind. A couple of months ago at Google's annual software conference, they showed off this Veo 3 video tool, and this is just crazy. It is crazy how good this tool is. I mean, still, for those of us with sort of a coarse skill at picking out AI content versus real content, it's still a little uncanny, but this just felt like a huge leap. So I was hoping to get a sense of whether or not this is something that hobbyists are going to use, are professionals going to start using this, are all the filmmakers going to start using this? Where do you see it going from here?

Kate Knibbs: I think that both hobbyists and professionals are going to use Veo and the tools like it that come after, I think they already are probably doing really interesting things with it. And I feel, I mean, pretty ambivalent because I'm sure there's going to be a really cool movie coming out that wouldn't otherwise exist because a young filmmaker working in their bedroom suddenly has access to this suite of tools and that's cool.
But then I do think the rise of this technology is inevitably going to be accompanied by loss of jobs, loss of skilled labor, a change in the industry that will be bittersweet for some and just plain bitter for others. And I don't think there's a way to stop it really. Maybe the copyright lawsuits will slow things down. But the idea that there might be some sort of needle that could be threaded where this tech will roll out, but people will keep all their jobs, I just think is unfortunately unrealistic as much as I'd like it to be so. It's a thing that's going to bring a lot of beauty and it's a thing that's going to bring a lot of misery.

Lauren Goode: Well, that is a sobering answer, Kate, but hopefully we were able to answer at least part of Janae's question. Let's go to the next one. This one comes from Elizabeth. It's also about AI, and Elizabeth asks us the following. "I work in a field where misinformation and conspiracy theories are replacing scientific evidence online. What happens when the next AI models begin learning from the new, less scientifically valid data? I'd love to know how LLMs are trained, and if anyone in the field is concerned about the effects of the replacement of scientific data online. And if so, what are they doing about it?" I mean, I think it's safe to say concern abounds. There's a lot of concern. We are officially concerned.

Kate Knibbs: Yeah, we are definitely officially concerned. To break it down a little bit when it comes to how LLMs are trained: LLMs, as much as we yearn for AGI, don't really think creatively. Right now, LLMs work by spitting out plausible sounding sentences. The way that they do that is that they are trained by a process where they're exposed to vast data sets of the written word. And this ranges from Anna Karenina and classical literature to the most disturbing forums on 4chan and Reddit you've ever seen. Ranges from the best scientific data to the worst scientific data. The models ingest these data sets and then they're instructed to look for language patterns. So they begin to learn how to predict the most probable next words in sentences. That's the first part of the training. Then there are all these refinement processes that take place, like fine-tuning and prompt-tuning. That's when the people who are making the LLMs tailor them for whatever use cases they imagine and do stuff like remove horrible, violent or sexually explicit content. This is where they put all the guardrails in as well.

Lauren Goode: And that fine-tuning can also be done to create sort of a precision model. Like if you're building a model just for one specific healthcare application, then you might want to fine-tune the tool for that.

Kate Knibbs: Yeah, definitely. And so the moment that an LLM becomes vulnerable to absorbing inaccurate scientific data is really the moment that the LLM ingests any of that data. Because it will really depend on how well the fine-tuning works, whether it's able to properly ignore misinformation or whether it accidentally just becomes this machine that spews misinformation. So the problem is right there in the beginning. Now, there are some people who think that maybe LLMs don't need to ingest as large of data sets as we originally thought. There might be a movement to cull the pre-training and training data so that the misinformation isn't there from the beginning. But for the most part, the sheer quantity of words that these models are trained on means that they're definitely trained on at least some bullshit.
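To make Knibbs' two-stage description a little more concrete, here is a deliberately tiny, hypothetical sketch in Python: a word-count "model" that learns next-word predictions from whatever text it is fed, accurate and inaccurate alike, followed by a crude data filter standing in for the curation and fine-tuning stage. The corpus, blocklist, and function names are invented for illustration only; real LLMs use neural networks trained on billions of documents, and their guardrails are far more elaborate (and, as the conversation below notes, still imperfect).

```python
# Toy illustration of the two stages described above: (1) "pre-training" that
# learns to predict the next word from whatever text is ingested, good or bad,
# and (2) a crude filter standing in for fine-tuning/data curation.
# Hypothetical sketch for intuition only, not how any production LLM works.
from collections import Counter, defaultdict

corpus = [
    "sunscreen helps prevent skin cancer",   # reliable text
    "sunscreen causes skin cancer",          # misinformation in the same data set
    "vaccines are safe and effective",
]

# Stage 1: count which word tends to follow which.
next_word_counts = defaultdict(Counter)
for doc in corpus:
    words = doc.split()
    for current, nxt in zip(words, words[1:]):
        next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most probable next word seen during training."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

# The model absorbs both claims about sunscreen; whichever phrasing is more
# frequent in the training data wins.
print(predict_next("sunscreen"))  # -> "helps" or "causes", depending on counts

# Stage 2: a toy guardrail that drops flagged claims before retraining,
# standing in for the fine-tuning and data-curation step.
BLOCKLIST = {"sunscreen causes skin cancer"}
filtered_corpus = [doc for doc in corpus if doc not in BLOCKLIST]
```

The point of the sketch is simply that the model has no notion of truth at the prediction stage; anything not caught by the downstream filter comes straight back out as a "plausible sounding sentence."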
Lauren Goode: And it seems like when you are thinking about the potential effects of LLMs spitting out false information, either because they've been trained on false data or because they've been programmed in such a way, the stakes are especially high in healthcare. I was just reading about how a group of medical researchers were able to very easily configure popular AI chatbots to spit out false information. And not just bad info, but also to sound very authoritative, which is what we know these chatbots do. And look, this is not a new idea. People have been able to jailbreak these chatbots for testing purposes. Even at WIRED, a little while ago, an editor and I were able to program a popular customer service chatbot to lie and tell callers that it was human. So people are doing this because they are trying to red team them and make them better. But in this instance, the researchers were basically able to tailor the chatbots to incorrectly answer questions like, "Does sunscreen cause skin cancer? Does 5G cause infertility?" Stuff that you can imagine people going online and searching for. I think one of the chatbots, Claude, which is made by Anthropic, refused more than half the time to spit out false information. The others just put out fake answers 100% of the time. So I guess my question is, at the breakneck speed that these companies, whether it's a small startup or a big startup or one of the frontier model companies, the speed at which they're moving to put out the next best AI model, what incentive do you think they have to put in guardrails to avoid spreading this misinformation?

Kate Knibbs: So I think that AI companies that are creating specialty tools for the healthcare industry will be far more incentivized than the companies that are creating general interest models, because there's already so much market saturation with companies that have general interest models out there. Frankly, it hasn't really hurt them that much that they're spewing out all of this bad information. So it would take some larger scandals, I would say, for them to really start focusing on creating dedicated teams to fight scientific misinformation, although I would love to see that happen. Now, there's a whole industry devoted to more tailored LLMs, and those include LLMs that are made for doctors. Those companies have obviously major incentives to make sure that they're not spewing misinformation because that's sort of their whole sales pitch, is like, "We are offering you a more precise, more accurate, safer version of an LLM that's designed to be medically accurate." So for instance, Google has its own medical question LLM actually, called Med-PaLM, which is very cool. I do wonder right now though, how many people in healthcare are using that versus just firing up ChatGPT? I really don't know. But the fact of the matter is LLMs spewing scientific misinformation is a problem. It's a problem that's going to be exacerbated by them then ingesting the scientific misinformation that they spew. The quality collapse that will happen when bad synthetic data begets more bad synthetic data is a real concern. And so the bottom line here is always exercise caution when you're using LLMs, even when it's a tailored LLM that's supposed to be more medically accurate. Because there's never going to be a perfect fine-tuning process, and all guardrails thus far have been proven flawed or inadequate in some way. So stay woke, basically.
Lauren Goode: I had the opportunity to ask a tech CEO about this not long ago, and I can't say who, because the story will be coming out at some point, but this person seemed very optimistic about the potential for AI in healthcare. Because of the way that it can help treat people holistically, because it has access to all of this information. If you're seeing a specialist, you're seeing a kidney specialist, you're seeing an endocrinologist, as doctors and nurses and healthcare professionals become specialized, you lose access to maybe the wealth of knowledge that your longtime family GP might have, or the ability to treat you in the same way. And I thought that was a rather optimistic answer. We determined that this person I was talking to is kind of a supreme technology optimist, and I consider myself more of a realist. But if we have to leave Elizabeth with something uplifting, maybe it's that.

Kate Knibbs: Yes. I'll keep my skepticism about several of the points that they made to myself, then. Just, yes, live your best life, Elizabeth. But maybe just ask a human, if you have a really, really pressing medical question at this point.

Lauren Goode: Check the citations.

Kate Knibbs: Yes.

Lauren Goode: Kate, thank you so much for being on Uncanny Valley.

Kate Knibbs: Thanks so much for having me.

Lauren Goode: We're going to take a quick break and when we come back, we're going to hear from WIRED's Paresh Dave, who is going to help answer the next batch of listener questions. So stick around. Welcome back to Uncanny Valley. Today we're answering questions from our inbox, which is to say we're answering your questions. My colleague Kate Knibbs was just here telling us about the different effects that AI is already having on the film industry and what happens when LLMs are trained on false data. For the next two listener questions, I realized that my colleague Paresh Dave, who sits across from me in the newsroom, was the perfect person to answer them. We are recording from separate locations today, so let's give Paresh a ring. Hey, Paresh, how's it going?

Paresh Dave: Good. Great to be here.

Lauren Goode: So I have one question sent by one of our listeners, Andrew, that is right up your alley. And Andrew says the following, "One story that I'm trying to understand and evaluate is the current and near to mid-future for Mozilla and Firefox. I have been a supporter of Mozilla for a long time and a Firefox user since version 1.1. That's a flex. But the current Google antitrust case," Paresh, you've been covering this closely. "The cryptic changes to the privacy policy earlier this year and the subsequent rollback, the sudden shuttering of Pocket and Fakespot, which were Mozilla products. If the organization is a force for good, which I believe it is, and having an independent browser is important, which I believe it is, what's the landscape going forward?" I guess to start, Paresh, what actually differentiates Mozilla's Firefox from other browsers?

Paresh Dave: The number one thing is that it's not made by Google. Google makes Chrome, which is the largest browser out there in terms of usage. Whereas Firefox is one of several browsers that promote this idea of providing a more private experience because it's not controlled by one of the giant tech companies that's trying to sell you ads or subscriptions. It can sometimes offer cutting edge features faster than Google can because Google has a lot of users to contend with when it rolls stuff out.
That said, Mozilla is still trying to make money and it does partner with Google. And I appreciate Andrew mentioning that Mozilla has been a force for good because it is under this nonprofit umbrella that does try to be a force for good out there. But that for-profit side, which includes the Firefox browser, runs into all the challenges of any traditional business.

Lauren Goode: As Andrew, our listener, mentioned, that whole landscape is changing and running the business of Mozilla is getting more difficult. How so?

Paresh Dave: Yeah, well, you mentioned the antitrust case. So Google pays Mozilla to promote the Google search engine inside the browser. It's an important source of traffic for Google and, obviously, a vital part of how Mozilla funds those altruistic ventures. And those altruistic ventures include this semi-independent research into online communities, all the consequences of using technology, really important work that's funded by Mozilla grants. But that deal between Google and Mozilla is among several that have been ruled by a US judge as anticompetitive. And sometime in the next few months that judge is going to issue some sort of order that will almost certainly restrict those deals. We don't know for sure that Mozilla's going to lose that money, but it seems likely that sometime in the next few years, after maybe appeals are exhausted, Mozilla will need to find some sort of way to make up that loss of Google's financial support. And yeah, Mozilla has been making cuts, like shutting down this popular but niche service Pocket, a personal library where users could save content to consume later on. It was something I used. I had to download all of my data from Pocket just the other week.

Lauren Goode: I happen to never have used Pocket. I'm a big Instapaper user, but people were very upset on the internet about this.

Paresh Dave: Yeah, I mean there is an important question. How do companies make money from this? It wasn't something I was paying for. I'm not sure how much revenue, if any, Mozilla was making from it. If anything, it's saved it for a few more years. And Mozilla made another cut, Fakespot, a tool for identifying how many reviews on a certain Amazon product listing are fake. Another thing I loved and used a lot but wasn't paying for. It's just emphasizing that Mozilla has to figure out a future pathway here.

Lauren Goode: Is what's going on with Mozilla in any way reflective of what is going on with the web more broadly, how it's changed, or any pressures that are being felt around the tech ecosystem right now?

Paresh Dave: Absolutely. There's the antitrust stuff we just mentioned, but there's also this AI gold rush happening right now. So it's this idea that browsers, like every other company, will have to find a way to harness AI to better serve their users. And that is what Mozilla and Google are contending with. I just met with Opera, a Mozilla rival that's testing right now, with a few dozen people, an experience that lets AI do the browsing for you. It's a fascinating idea where you type in a command like "Go buy socks" and the browser goes off to put some in your cart and makes the purchase for you. Now in the demo that I got of this the other day, it was attempting to buy the wrong size socks for this person that I was speaking to. So it's not perfect, but Mozilla is exploring similar tech and this all remains expensive to develop and expensive to operate.
So it's only going to add pressure on Mozilla to save costs, explore new ways of making money, but also do all that while balancing, as Andrew said, being a force for good. I don't see how it's going to be easy for the next few months, let alone the next few years, for Mozilla.

Lauren Goode: While fully acknowledging that we are in the early stages of generative AI, it really feels to me emblematic that the example you just described is basically burning the earth with the amount of energy that it uses to use agentic AI in a lesser known browser to order someone the wrong pair of socks. Like what a time.

Paresh Dave: That is where we are at right now. And I was speaking with a pretty high up Google executive recently who also said that these technologies are nowhere near sort of the level of capability to roll out to masses of people, but this is where the web is going, where AI systems handle more of the work for you. Is it ultimately the right thing for society? Debatable, but it certainly feels like it's a train that can't be stopped.

Lauren Goode: All right, thank you for that thoughtful answer. And Andrew, I hope we answered your question. And our last question before we take a break comes from Brian who wrote to us saying, "It seems like a lot of NVIDIA GPUs will be needed for the huge AI data centers, which are projected by the likes of Google, OpenAI, et cetera. How long might it take to produce them? How many electricians will it take to wire them up? I feel like there's so much bullish talk about AI and not a lot about the potential hardware bottlenecks." Paresh, let's just take a step back. Why are NVIDIA chips so important for any kind of AI operation?

Paresh Dave: So the chips are very good at handling multiple tasks at once, which makes them ideal for training these highly interconnected AI models like those behind systems like ChatGPT. And the idea is that NVIDIA has made it really easy to program software to do that training. So they have this thing called CUDA that makes it very simple for developers to send a bunch of data to these NVIDIA GPUs and have the GPUs make the important connections that then form these models. And everyone wants more of these chips. In any interview I do with folks here in Silicon Valley right now, they're like, "Well, if our budget allowed it, we would buy a thousand more GPUs." Whatever it is. And some customers still do face backlogs for access to the newest versions. But over the past couple of years, I would say the chips haven't been the only bottlenecks. And so now it's been generators, it's been power systems, even just basic wiring and cabling have been caught up in post-pandemic supply chain backlogs, tariffs, just not enough to meet the feverish demand out there.

Lauren Goode: And also, I mean NVIDIA is not the only player in this game. They certainly have an overwhelming share of the AI GPU market right now, but there's also Advanced Micro Devices, AMD, which has been around for a very long time, most known for CPUs, but now has a growing AI accelerator business.

Paresh Dave: OpenAI itself, which has become this big behemoth in AI. They have their own chips team that's probably running in a hundred different directions, but they've hired a lot of great folks from other companies. And so that's another thing to be paying attention to. OpenAI could end their reliance on NVIDIA chips.

Lauren Goode: Right. So yes, there are others out there, all sort of chomping at the heels of NVIDIA.
But another thing that we should talk about that you mentioned earlier is just the enormous amount of energy these data centers need to function. How are companies preparing for this massive need for resources?

Paresh Dave: Without a doubt, electricity access is the biggest issue holding up data center construction. I speak to folks in the data center industry regularly, and that hasn't changed over the past year. That's the number one obstacle. And these companies that are building the data centers often have committed to using renewable or carbon-free energy to power their data centers, which limits where they can build them. And they're facing this tricky dilemma because they're getting criticized by activists and outside groups and sometimes their own employees for trying to build data centers in places where they're going to be reliant on less clean fuel sources. And this is all coming at the same time that local pushback to data center projects is kind of growing. There was a report from Data Center Watch, a sort of new group that's formed, that found local opposition to data centers has blocked or delayed development in many places across the country recently. AI electricity demand, according to a BloombergNEF analysis, is expected to triple over the next decade. So yeah, I would say the electricity bottleneck, and the political bottleneck that comes with that, may very well be the thing that we're talking a lot more about a year from now.

Lauren Goode: And we should note that the AI accelerationists hate the fact that activists are watching this, and they're spending a lot of money on lobbying efforts too, to try to make their way around regulation or enact their own ideas for regulation that would allow AI companies to move as quickly as possible despite the concerns about energy and frankly other concerns too.

Paresh Dave: Yeah, I mean we just saw an effort like that fail, where tech companies had hoped that in the One Big Beautiful Bill that President Trump signed into law last week there would be a ten-year moratorium on states' ability to enact AI regulations, and ultimately that provision was removed from the bill before President Trump signed it. So states can continue to regulate this area, in sort of a victory for those activist groups.

Lauren Goode: Paresh, thank you so much for these thoughtful answers.

Paresh Dave: No problem. I'd love to be back all the time.

Lauren Goode: All right, well don't go anywhere because we're going to take a quick break and then when we come back, I'm going to ask you for your take and Kate's take on one of the most hyped up Silicon Valley collabs that we've seen in a long time. Kate, Paresh, thank you so much for teaming up with me this week to answer these listener questions. I have one last lightning round question for the both of you. I want your hot takes. Our listener Shane wanted to know our thoughts on the Jony Ive and Sam Altman team up. In case you haven't heard the news, back in May, OpenAI and Jony Ive, the famed Apple designer behind the iPhone, announced that they were teaming up to create a new kind of device that they promised to ship out faster than any company ever has. This is supposedly a new device for the AI era. The details of what this gadget will be or how it will look are still not public, we're still not really sure, but we can place our bets. What do you both think is potentially coming from this partnership? You can be totally candid.
Paresh Dave: So my guess is something the size of a small calculator, like the ones that people used to fit into their shirt pocket or their patch pocket. Because I think Sam Altman has said he wants something sort of portable. But it'll be something that has a kickstand on the back, so you can lay it on the table and just have it sit there and kind of be around you wherever you are, whether it's the dining table or the kitchen counter or your desk at the office. It'll annoyingly require charging, some dumb port that makes no sense, and Europe will complain. It won't have a display in the first version, but eventually will because there's no other way to make money from ads. And I think if you're trying to benefit humanity and bring AI to everyone, you're going to both have to have a free ad-supported model and a subscription model. So yeah, that's my take.

Lauren Goode: I'm just grinning from ear to ear because this is so hyperspecific. Is this because you know these details or is this because you just so firmly believe that this is what this device has to be?

Paresh Dave: I mean, I wish I knew what was going on for sure. No, I think there are only so many form factors, which is sort of the jargony term for the shape and size of a device. There's only so much you can do. Maybe this is like a throwback to the iPod Nano, or which one didn't have the screen? Shuffle?

Lauren Goode: The Shuffle was the tiny one.

Paresh Dave: Yeah, maybe something that looks like that. Then Jony doesn't even have to reinvent the wheel. He can get his old drawings back and he's halfway there.

Lauren Goode: Just make the iPod wheel. But for the AI era. Okay. I think Kate's take is a little bit different. Kate, what's your hot take on this Jony and Sam mashup?

Kate Knibbs: Okay, so it seems like they probably had a lot of fun shooting that gorgeous commercial about their friendship in San Francisco that they put out for this product that has no name. And I'm very happy for them and I think that the cinematography in that commercial was beautiful. I'm trying to think of positive things. Yeah, the tool, I'm so confused. So we've heard that it's not a phone, it's some sort of desktop device. I'm running up against the limits of my imagination as to what it could look like and actually not be stupid. I think it's going to be beautiful looking. I do, if Jony Ive's involved, I think it'll be sleek. Will I want it? I just can't imagine it. But I'm excited to see what it is, if it ever is. I am getting really strong George R.R. Martin, The Winds of Winter is coming vibes from this whole project, to be honest. I'll believe it when I see it.

Lauren Goode: Thank you all for listening to Uncanny Valley and thank you so much for sending us your questions. Keep emailing us. We do read your emails and we love your feedback. If you like what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you'd like to get in touch with us, any comments, questions, show suggestions, you know the email, it's uncannyvalley@ Today's show was produced by Adriana Tapia and Jordan Bell. Amar Lal from Macrosound mixed this episode. Matt Giles fact-checked this episode. Kate Osborn is our executive producer. Katie Drummond is WIRED's global editorial director. And Chris Bannon is Condé Nast's head of global audio.

Disney Just Threw a Punch in a Major AI Fight

WIRED

26-06-2025

  • Entertainment
  • WIRED

Disney Just Threw a Punch in a Major AI Fight

The Disney+ logo is displayed during San Diego Comic-Con International at the San Diego Convention Center in San Diego, California, on July 24, 2024. Photo-Illustration: WIRED Staff; Photograph:

All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links.

Disney and Universal recently filed suit against Midjourney, an AI image generation startup, alleging the company has become a 'bottomless pit of plagiarism,' and that it freely reproduces the studios' copyrighted content, including their most iconic characters. We dive into the details of this case and others, and explain how this conflict gives us a window into the growing tensions between AI companies, publishers, and creators.

Mentioned in this episode:
"'Wall-E With a Gun': Midjourney Generates Videos of Disney Characters Amid Massive Copyright Lawsuit" by Kate Knibbs and Reece Rogers
"Disney and Universal Sue AI Company Midjourney for Copyright Infringement" by Kate Knibbs

You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, Katie Drummond on Bluesky at @katie-drummond, and Kate Knibbs on Bluesky at @knibbs. Write to us at uncannyvalley@

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Michael Calore: How is everybody in the room doing today?

Katie Drummond: Mike.

Michael Calore: Katie.

Katie Drummond: I am awash with physical and psychological stress.

Michael Calore: Oh no.

Katie Drummond: I don't know if anyone knows this who doesn't live in New York. It is very hot here this week. We are in a heat dome. It's very hot, it's very humid, so I'm physically stressed. It's also the last week of school. I have a child finishing school this week, and what that means basically, if you're a parent, you understand, the last three weeks of school are just totally fake and every day there's a party, every day there's a request that parents bring in snacks. Every other day is a half day. Today was color day. Wear all your favorite colors. Also water day, bring your swimsuit. We're going to play with water. And it's like, what? I can't keep track of this. So anyway, I'm stressed for those little micro reasons, and then obviously there's the bigger existential planetary stress happening that I don't think I need to get too much into, but I have been watching a lot of CNN.

Michael Calore: Speaking of checking the news. I played a wedding this weekend. I'm in a band and we often get hired for weddings, and somewhere between the ceremony and the reception, the United States decided to involve itself in a war in the Mideast. I was like, okay, this sucks for the people who are getting married today, but also, we have to throw a party. So 20 minutes later, I was on stage playing cover songs, and everybody was getting drunk and dancing.

Katie Drummond: Did the wedding guests react? Was it a topic of conversation at the wedding or were they totally disconnected from the news?
Michael Calore: I think they were maybe not totally disconnected from the news, but they were there to party, so a party commenced uninterrupted.

Katie Drummond: Well, I'm happy for them. Sometimes you just have to have a party.

Michael Calore: This is WIRED's Uncanny Valley, a show about the people, power, and influence of Silicon Valley. Today, we're talking about the recent lawsuit filed by Disney and Universal against the AI image generation startup, Midjourney. The lawsuit alleges that Midjourney has become a, quote, "Bottomless pit of plagiarism," unquote, and that it freely reproduces the studios' copyrighted content, including their most iconic characters. It's the first time two major Hollywood studios have taken legal action against an AI company, and the lawsuit has the potential to shape how intellectual property is treated in the AI era. We'll dive into why the studios think Midjourney is infringing on their content, and how this conflict gives us a window into the growing tensions between AI companies, publishers, and creators. I'm Michael Calore, director of Consumer Tech and Culture here at WIRED.

Lauren Goode: I'm Lauren Goode. I'm a senior correspondent at WIRED.

Katie Drummond: And I'm Katie Drummond, WIRED's global editorial director.

Michael Calore: So before we start today, our listeners should know that we have a special guest with us, the inimitable Kate Knibbs, senior writer at WIRED, who has been closely covering the Disney Universal lawsuit and will be able to tell us all about it. Welcome to Uncanny Valley, Kate Knibbs.

Kate Knibbs: Hi, guys. Thanks for having me.

Lauren Goode: Knibbs.

Katie Drummond: Thank you for being here.

Lauren Goode: Extreme Knibbs is on the show.

Kate Knibbs: Well, last time I was here, this podcast had a different name and way less Katie, so this is very exciting.

Lauren Goode: We've just been beta testing this for the day that we could bring you on, Knibbs.

Michael Calore: Okay, well, let's start our conversation with Midjourney. Kate, can you tell us what this company is about? And also, I'm curious if any of us here have used it.

Kate Knibbs: Midjourney is a pretty small player in the generative AI space, small potatoes compared to something like OpenAI. Its main tool that it offers consumers is actually quite similar though to OpenAI's DALL·E or Stability AI's Stable Diffusion product. So it's a tool where you put in a prompt and it creates an image using generative AI magic based on that prompt. For a long time, the way that you did this, and this is an example of how small Midjourney was, you had to go on a Discord and then type your prompt into the Discord chat, and then that's how you would get your image. There wasn't even a dedicated web interface until 2024. So now it's just launched a video option, so it's definitely expanding and continuing to grow, but I'd say a secondary or tertiary player on the scene.

Michael Calore: So why are Disney and Universal, these two Hollywood juggernauts, suing Midjourney?

Kate Knibbs: Well, they allege that Midjourney is infringing upon their intellectual property. If you've ever played around with the app, now it's just a web interface, you put in whatever you want and it spits out whatever you want. There's not the most sophisticated guardrails going on. So if you wanted to make an image that showed, for example, Homer Simpson or Wall-E or Darth Vader or the Boss Baby, which is one of Universal's most beloved cartoons apparently.

Katie Drummond: I love Boss Baby.
Kate Knibbs: Well, you can do it pretty easily and there have been a lot of people compiling different examples of how easy it is to create images that feature Disney and Universal and other popular characters using tools like Midjourney. I don't know what the conversations were like within Disney and Universal's offices, but apparently they had enough. They're late to the AI copyright lawsuit game here. There's already been dozens and dozens filed, including some against Midjourney, but they came in with a splash because their complaint against Midjourney features hundreds of images of their very, very recognizable characters that were spun up by Midjourney's AI tools.

Katie Drummond: Who here has used Midjourney? Is this a tool that you guys use, Mike and Lauren? Kate, I assume that you've played around with it quite a bit.

Lauren Goode: I think I used it once, and having to go to the Discord server to type in the prompt was a total barrier, and I was like, I'm never using this again. Yeah.

Katie Drummond: Poor user experience.

Lauren Goode: Haven't tried it since, but sure. Yeah. How about you, Mike?

Michael Calore: I've used it a couple of times, mostly just asking it to make cats surfing or cats making pancakes, just because I was put on the spot, but yeah, a friend of mine has access and showed it to me and I was able to type in some prompts. This was when it was just an image generator, so it wasn't doing video, but from what I understand now, you generate an image and then you just click animate, and then it just turns it into an animation.

Lauren Goode: Wild.

Michael Calore: Yeah.

Lauren Goode: Kate, how much were you using it before you reported out the story?

Kate Knibbs: I started playing around with a lot of the image generators back in 2023 when I started writing more about AI, just so I understood how they worked. And I don't know, I'm just not ever super keen to spend my time generating AI images. I'm definitely more of an LLM girl if you're going to get into what GenAI tools we use, so I think I used it basically just for testing. I haven't been incorporating it into my personal creative practice.

Katie Drummond: I love the phrase personal creative practice. That is amazing. We all have one.

Lauren Goode: Katie, have you used it?

Katie Drummond: No. Well, I haven't, but I use it vicariously because I'm married to someone who loves generative AI, uses every possible generative AI tool at all times of day and night, and has used Midjourney a lot and is constantly generating new images as part of his personal creative practice. There's a lot of Midjourney activity in my household. I just cannot say that I personally am spending my time generating images with generative AI. No.

Lauren Goode: Fair enough. I think we are just cruising towards an episode, Katie, where your spouse joins us on this show.

Katie Drummond: Oh my God, we would never leave the studio. We would be in here for seven, eight hours. I'm married to a very talkative person.

Lauren Goode: It would be like the Acquired podcast, but Uncanny Valley. All right, so back to this lawsuit. Kate, something that caught my eye in your reporting is that Midjourney has been really, really open about the fact that it's just scraping the internet to train its model to create this data set of images.
This is something that we know all generative AI startups do, but Midjourney's CEO has been really candid about it and said a few years ago that there just isn't really a way to get a hundred million images and know where they're coming from. He said, quote-unquote, "It would be cool if images had metadata embedded in them about the copyright owner," or something. But that's not a thing. There's not a registry, and a lot of startups, rather than waiting for that technology to catch up or for there to be guardrails in place, just moved full speed ahead, and Midjourney was openly ripping Disney content from the internet. Is that what we're getting from this?

Kate Knibbs: That is definitely what he said, and I do want to emphasize, at the time that these tools were being created, even a year or two ago, that thinking was almost universal in Silicon Valley. There wasn't that much debate about whether scraping the internet was kosher to create this kind of tool. It was just assumed that if you could scrape something off the internet, then it was fair game. And at the time that he made those comments, I think it was 2022, we were in a very different space. Since then, there has been this whole boomlet attached to the rise of generative AI where there's all of these different companies that are basically, I don't think they have metadata embedded in images, but there's licensing marketplaces, there's all of these different licensing startups. A lot of different companies are making licensing agreements.

Lauren Goode: There was also a coalition I think spearheaded by Adobe, and a bunch of tech companies started to get involved around watermarking images specifically. Is that similar?

Kate Knibbs: Yeah. Yeah. So that's exactly what I'm talking about. It just took off. People realized that there was a big market opportunity here to create licensing agreements and to pay for content used as training data. So at the time, I think he probably said that in good faith, but I think right now to say that would sound kind of ridiculous because you'd have to be ignoring all of these different initiatives and programs and startups that have cropped up, and all of the dozens of lawsuits that have cropped up saying, actually, no. It's a copyright violation to train your models this way.

Katie Drummond: I will say it is unfortunate for him and his company that he said stuff like that on the record, when fast-forward to 2025, you have Disney, which is notoriously tough when it comes to copyright, actually leading this effort against Midjourney. And from what we know, Disney's top lawyer, Horacio Gutierrez, actually led this legal fight by basically asking his peers across Hollywood to join in. So he asked a bunch of these different companies. Eventually, Comcast Corporation, which owns NBC Universal and Universal Studios, agreed to participate. So the fact that Disney is really taking the lead here, I would say, I'm not a lawyer, but Kate, is your understanding that that is a particularly stressful place to find yourself if you are Midjourney, and all of a sudden Disney, specifically Disney, is coming out swinging?

Kate Knibbs: It's literally the worst possible thing that could happen.

Katie Drummond: Oh. Ouch.

Kate Knibbs: Disney is the final boss of copyright litigation, and if there's a second final boss, it's Universal, so Midjourney has found itself in a real pickle. This isn't the first time Midjourney is being sued actually for copyright infringement.
There's a group of visual artists who filed a lawsuit several years ago at this point, but this is a massive escalation. This is going from Pee-Wee baseball to the World Series type of thing.

Katie Drummond: Oh, I have goosebumps.

Michael Calore: So the lawsuit is filed at the beginning of June, and then within just a couple of weeks, Midjourney comes out with an update to its generative AI slate of tools. They release a video generation tool called V1. Now Kate, last week, you had an exclusive story for WIRED about V1, and you noted that it doubles down on this pattern of displaying Disney and Universal copyrighted imagery. So what did you find out when you tested it?

Kate Knibbs: Yeah, I will say this about Midjourney. They are bold. They're a bold company, because Reece, our colleague, and I tested it and we were still able to generate a number of different videos that prominently featured Disney and Universal characters. Not only that, but some of the videos that we were very quickly able to generate featured Disney characters doing un-Disney-like things, like Wall-E, the lovable robot from the film of the same name, holding a gun, and then Yoda smoking a joint. And I did get a lot of feedback that it seems like Yoda's high anyway, so maybe that's actually canon, but still.

Lauren Goode: These are just mascots for the ages. What is more representative of this time than Wall-E holding a gun?

Kate Knibbs: Yeah. But yeah, so they did appear to have some guardrails. There were certain characters, I think Darth Vader was one of them, where we had to misspell the name, so there was at least a slight effort that appeared to be taken to stop people, but it was still incredibly easy to generate the videos. I don't know where that leaves Midjourney. It doesn't seem like someplace great.

Michael Calore: So that's interesting. So to get some characters to appear, obviously some of the names of the characters were blocked, but to get some characters to appear, you could just misspell their name and it would pass the filters and generate something that was exactly what you were typing?

Kate Knibbs: Yes, unmistakably too. It wasn't a super off-brand Shrek or Darth Vader. It was the character that we know.

Katie Drummond: Did you test Boss Baby?

Kate Knibbs: That might be our big mistake.

Katie Drummond: There's your next follow up.

Kate Knibbs: I should do a follow-up just trying to see how much messed up stuff I can make Boss Baby do.

Katie Drummond: My boss made me generate Boss Baby.

Kate Knibbs: I like Boss Baby. I think those movies are hilarious. It's Alec Baldwin's best work. Well-

Michael Calore: So there are dozens of lawsuits against AI companies for copyright infringement right now, and the fact that these models are trained on previously existing work has always been a controversial feature. So let's talk about the impacts that Disney and Universal specifically will have in this fight now that they've joined forces against generative AI. What makes this case different from the rest of the lawsuits that are still out there?

Kate Knibbs: Well, as we were talking about earlier, just the fact that Disney is doing this alone is enough to make everyone take notice. This is the company that you don't want suing you for copyright infringement.
Apart from that, and this ties into the fact that this isn't the company that you want suing you for copyright infringement, all of the copyright experts that I've spoken to have really emphasized how well-argued the complaint is, including some copyright experts who generally are very pro-fair use and aren't universally on the side of the plaintiffs. They've really been like, "Wow, I think this is going to be a struggle for Midjourney to make the fair use argument." One of the experts, Matthew Sag, who is a professor of internet law at Emory University, I believe, is not a guy who thinks that all of these cases are going to turn out well for the plaintiffs. He tends to be pretty skeptical of these cases, and he even said, "I don't know how Midjourney is going to approach this because I just don't think that juries are going to buy that thousands of images of Darth Vader aren't copyright infringement." So the fact that Disney came with many receipts I think is additional cause for concern for Midjourney, and for all of the companies that aren't being sued right now but could possibly be next.

Lauren Goode: Well, speaking of Darth Vader, Disney recently licensed the use of Darth Vader's voice for a chatbot in Fortnite. Everybody knows what Fortnite is, right? And what's interesting about this is this deal ended up causing some backlash from the actors in SAG-AFTRA, which is the union that represents actors and voice actors and stuff like that, where they were arguing that this is an example of AI-generated voices replacing their work, their very human work, without authorization, and Disney has also reportedly been in conversations with companies like OpenAI about potential partnerships. So what's interesting about this lawsuit is it's Disney pushing back against Midjourney for what it feels is unfair and illegal use of its intellectual property, but Disney itself is taking these steps to embrace AI in Hollywood.

Katie Drummond: Well, and I think all of these lawsuits and this lawsuit, and then just even these deals that Disney is doing, these licensing deals, all of it combined I think just highlights how little regulation or legislation exists around how AI can and cannot be used when it comes to copyrighted work and IP. And so you're basically asking courts to make their own judgments on these cases, but there's no framework really to underpin exactly what they are deciding on or what those decisions are based on. Kate, is that your understanding of where we are in terms of any sort of actual regulation around this?

Kate Knibbs: Yeah, it's definitely I think going to be something that the courts are going to be left to decide, especially because the Big Beautiful Bill's ten-year moratorium on AI legislation at the state level seems to be gone. And I will say, a lot of people on both sides don't seem that upset about this. I think there's a sense that it might be really difficult to regulate this properly without stifling innovation. We're not seeing a huge clamoring for federal law to provide the guardrails here. Basically, it's going to be decided by the courts. Probably the first case that goes to trial will go all the way up to SCOTUS.

Michael Calore: Wow. I hope that Yoda smoking a joint is on the official record at the US Supreme Court. All right. Well, let's take a quick break and we'll come right back. Welcome back to Uncanny Valley.
So we've been talking about the recent lawsuit filed by Disney and Universal against Midjourney, but the AI copyright battle isn't just happening in Hollywood, it's happening everywhere. And our guest today, Kate Knibbs, created a tracking page where WIRED readers can see how AI copyright lawsuits are unfolding across many industries. Kate, can you tell us about some of the main lawsuits that have caught your eye, and which industries seem to be the most active in suing AI companies?
Kate Knibbs: So there was a lot of movement initially with book authors suing AI companies, and then now media outlets suing them, visual artists, and more recently, there have been some music labels. The Disney case is also notable because it's the first time Hollywood studios jumped in. They weren't really in this world yet. Among the media companies, I would say the biggest player there is the New York Times, which is suing OpenAI. I've heard similar things about the lawsuit that it filed as I did about the Disney one, which is that it's just exceptionally well-argued, and it had a lot of emphasis on showing outputs of the LLMs that were identical to New York Times articles. So that one is definitely a major one to watch, but there has been some movement in a few others. Kadrey versus Meta: Richard Kadrey is a novelist, and a group of novelists sued Meta. That one's been really spicy because it came out in discovery that Meta had pirated a lot of the books that it ended up training on, and it openly admits that it did this, and the judge who's sitting on that case is just a character, so I really like watching the hearings for entertainment value. If anyone out there is as big of a nerd as I am, you can live stream them, and I recommend it. That might be one of the first to either go to trial or go to summary judgment. And then Bartz versus Anthropic is another one from novelists. That one is also progressing quickly, and the judge sitting on it is really well known for being well-versed in fair use, so something might happen there. And then Suno and Udio, which are two AI song generators, got sued by the major music labels more recently, but there have been talks happening already between the labels and the song generators, settlement talks. So if those settle, that will be a huge deal, because we've been looking for what's going to go to trial and what's going to settle. I could go on, but those are the ones that I think you guys should know about.
Michael Calore: So publishing is definitely at the top of the list of industries that have been worried about AI plagiarizing original work, and we should all know because we're all in the publishing industry. But then there's the content that is the opposite of thoughtful, human-made work, and that is AI slop. The term explains itself when you say it out loud, but let's quickly talk about what AI slop is and why it seems to be everywhere.
Lauren Goode: I can take this one, but also, I do want to toss it back to Kate, because Kate, you are the queen of AI slop, and I don't mean that you generate it. I don't mean that it's part of your personal content creation vector or whatever we're calling it, but you've written a lot about it. AI slop is just low-quality, shoddy AI content that is appearing online. It is proliferating our feeds. It's often on social media, but it's not just on social media. It is now being passed off as legitimate, quote-unquote, "journalism."
For example, last month, the Chicago Sun-Times and the Philadelphia Inquirer both published these special sections recommending summer reading lists, and the lists included a bunch of made-up books by real authors, and these names and titles were just thrown together at random. Slop isn't just made-up stuff, though. I think it's got a certain aesthetic. It's part of this growing trend of the enshittification of the internet, which of course Cory Doctorow wrote about a few years ago, and now it's just the term we use. It feels like spam, and sometimes it's easily recognizable and sometimes it's just not.
Katie Drummond: So you mean the videos I see on TikTok of Donald Trump and Jesus Christ walking on the beach are not real?
Lauren Goode: No, those are real.
Katie Drummond: Oh, okay. That happens.
Lauren Goode: Those really happened.
Katie Drummond: Oh, okay. Because I've been faving all of them, because I want to see more. So those are AI. Got it. Okay.
Lauren Goode: Yes, exactly. Same with JD Vance breakdancing with Pope Leo, those are real.
Katie Drummond: Oh, I have... Yes, of course.
Lauren Goode: Yeah. Hasn't killed him yet.
Michael Calore: A lot of these examples are funny or fun, but then there are ones that are more serious. There was AI slop coming out of current events in the Mideast recently, right?
Katie Drummond: Oh, of course. Yeah.
Michael Calore: And politicians and world leaders will retweet these things, even knowing that they're fake, just because it appeals to their sensibility and it helps them spread the message they want to spread.
Katie Drummond: Oh, I make jokes when I'm stressed out and uncomfortable, and I would say it is incredibly uncomfortable and stressful, I think you would all agree with me, being a journalist right now. Try being the editor in chief, let me tell you. And actually watching AI slop proliferate across the internet, across all these platforms, sometimes be mistaken for factual information by consumers, at the same time as we are in this very existential moment for news and media. Yet again, we are in an existential moment for news and media, in many ways because of AI, because of the way Google is changing their search, because of other ways that AI is changing how people access information. Publishers once again are essentially in the crosshairs of all of that, and to add insult to injury, you then open TikTok and Jesus and Donald Trump are fishing, and it's just like it's everywhere. It's like it's surrounding you if you are a journalist, because you're experiencing the slop itself. You're seeing what it's doing to the information landscape online, and then you're banging your head against a brick wall because Google did this, that, or the other thing with AI Overviews, and all of a sudden, I'm inventing numbers, I genuinely am inventing numbers, but all of a sudden, your search traffic is down 50%, and that has existential ramifications for publishers. There's also this weird thing happening that has caught my attention, and Kate, you've reported on this, which is where AI-generated content is actually like a feature for some websites and actually works really well for them. So WIRED found that over 54% of longer English-language posts on LinkedIn, everybody's favorite social network, are likely AI-generated. Now, LinkedIn has said that it monitors posts to identify low-quality and repetitive content, but AI is probably really good at LinkedIn because generic, bland writing is kind of what LinkedIn thrives on.
I think that that's interesting. It's not necessarily a good thing, but it's just another indication of how pervasive generative AI has become online.
Michael Calore: Yeah, and it's particularly difficult when it's become pervasive on the places that we used to rely on for accurate information or the places that we use for research. Not a lot of people are used to going onto Facebook to find out accurate information, but Facebook used to be the place that you went for news. Now, if you were still relying on Meta programs to find news, then you're not going to find as much accurate news as you used to. You may not notice that erosion happening, but it's happening. I think the bigger example is probably Google, because everybody is used to going to Google, typing something in, and getting an answer that they can trust. And now with AI Overviews and with AI Mode and all of the different interfaces that Google is just doubling down on with putting AI tools in to generate these answers, you cannot guarantee that you're going to get accurate answers anymore. So if you're a person who is used to using these tried and trusted tools, what are you supposed to do now?
Katie Drummond: I would add too, on the Google piece, not only are you potentially not getting accurate information. If you search something on Google and you get an AI Overview, it is entirely possible that it is just completely inaccurate. It's also entirely possible that that AI Overview was generated using journalism from publishers who rely on audiences coming directly to them, to their links, to generate revenue, and they have then just missed the opportunity to make a little bit of money and bring someone into their publication because Google has basically sucked the information out and is now just providing it in a tidy list of bullet points at the top of the page. So two very bad scenarios there, depending. The information might be accurate and it was pulled from us and we got nothing for it, or it might just be total garbage.
Kate Knibbs: Mm-hmm. Yeah.
Michael Calore: All right. Well, in order to not be all doom and gloom, I want to offer a little bit of advice to our listeners about what they can do if they are listening to this and feeling concerned. How do you get good information on the internet in the world of AI slop?
Lauren Goode: Subscribe to WIRED. But truly, you should.
Kate Knibbs: That's the big takeaway, subscribe to WIRED.
Lauren Goode: Yeah.
Kate Knibbs: I think it's really important that we all treat... I'm not saying never use an AI search product, okay? Because, I don't know, you're not going to listen to me about it and you're probably going to do it, considering my husband absolutely loves asking ChatGPT for life advice about everything.
Lauren Goode: Oh my God, mine does that too.
Kate Knibbs: I'm going to tell him not to.
Lauren Goode: Yeah, yeah. I'm just kidding. I don't have a husband, but I just really wanted to join the dialogue about it.
Katie Drummond: This is AI slop, Lauren.
Lauren Goode: Yeah, exactly. He just goes to a different school.
Kate Knibbs: But you have to use it with the knowledge that it's a starting point. You have to fact-check what it's spitting back at you. Okay. So I bought my real husband a snake for Father's Day, because I hate myself, but he really wanted one, and he has been using ChatGPT to figure out how to take care of it, because I didn't look into how high-maintenance they are, and it turns out they eat rats. It's a whole thing.
And most of the info actually has been accurate, but I've been like, "You've got to double-check what this is telling you." I was like, "Are you sure about the rat?" And it was true. But my point is, people, you need to fact-check whatever the AI search engines are telling you. Don't just feed a snake a rat because that's what it says. Go to the library and look up a book.
Katie Drummond: What a visceral, shocking example you provided to back up your assertion. Wow. Okay.
Lauren Goode: I like how you didn't even say do some legitimate internet searching. You just went straight to, "I need you to go to the card catalog and I need you to dig through there and find the old Britannica on how to take care of a snake." I love this. My other recommendation: get off Facebook. Just get off the big blue main Facebook. Really, folks? Are we really still on there? Have you seen it lately?
Michael Calore: Yeah.
Kate Knibbs: That's where I got my snake.
Lauren Goode: Are you serious? They need to spin off Marketplace. They need to make that a separate... They need to do an app split. I also am on Marketplace. Despite that, everyone should get off Facebook because of the incredible amount of AI slop that's on there.
Michael Calore: And it's easy to say just fact-check whatever you're searching for and check the sources, but so many people are just not doing that. And it's hard because it seems as though every search engine option is incorporating AI now. I remember when AI Overviews first came out and people were like, "I don't like this. What should I do?" I would recommend that people just use a different search engine like Brave or DuckDuckGo. But now both of those search engines have incorporated their own version of AI Overviews into the top of every page of search results, and I think that the only search engine that I've found that doesn't do that is Ecosia. E-C-O-S-I-A.
Lauren Goode: It sounds like a cleaning product.
Michael Calore: They call it Ecosia because it's good for the environment. They don't boil the oceans to power it. They offset their carbon emissions by planting trees. But that's the only search engine that I've found that feels like a real search engine, that actually gives you good results and does not put an AI overview at the top, so that's one step.
Katie Drummond: I feel very firmly that people just need to go directly to the source, period, the end. When it comes to information or entertainment, I just think identify which sources you trust, pick a handful. If it's WIRED, we're grateful and that's fantastic. If it's the New York Times or the FT or the Wall Street Journal, whatever it is, just go spend time with them directly. The less time I think people spend with intermediaries, whether it's Facebook or Google or ChatGPT or X or whatever else, the better. This is the internet as we now know it, and it will be what the internet is in the future, and people need to start getting used to just going directly to the source of information or entertainment that they choose to spend time with.
Lauren Goode: When I think about it philosophically, I think a lot about the fact that, going back to the history of the social internet, the consumer internet, there was this idea that we were offline and then you went online, and that was like an alternate life, and people literally did create alternate identities or they were living in Second Life, but there was this idea that that wasn't real life.
And then I think sometime around the 2010s or so, we would constantly make the argument, being on Twitter, "No, this kind of is real life, or at least it's a reflection of real life." And now it feels like the pendulum has swung back again, where we're always online but a lot of it is not real life, and that is what it is starting to feel like because of things like V1 or Google Veo 3. Some of those videos are incredibly realistic, and sure, that's really cool for our personal creator vector things that we've been talking about throughout this podcast, as Kate said, but it's also pretty terrifying.
Michael Calore: Yeah.
Katie Drummond: It's a personal creative practice.
Lauren Goode: Thank you very much. Personal creative practice.
Michael Calore: We'll share our prompts on LinkedIn. Find us there. Okay, let's take another break, and then we'll come right back. Thank you all for a great conversation today. We are going to shift gears now into our personal recommendations for our listeners. Kate, as our guest in the hot seat, you get to go first. What's your recommendation?
Kate Knibbs: Well, building off what Lauren just said about Second Life, and by the way, I want to read an entire essay from you about that, so I think you should pitch one. It's this book called Second Life by Amanda Hess, who's a New York Times writer. Second Life is about having a child in the digital age, and it's of great interest to me because one of my pet obsessions is that it should be illegal to put children on the internet. So I picked up this book not really knowing what to expect. It's much more about being a parent in the internet age. It's really well-written and interesting, and I loved reading a physical copy of it without any screens around me.
Michael Calore: Nice.
Lauren Goode: Love that.
Michael Calore: Who wants to go next?
Lauren Goode: I will go next. It is also book-related. So I was in Texas, and in the hotel that I was staying in, there was a copy of American Short Fiction. It's from a nonprofit literary organization based in Austin, Texas, where I was, that puts out these regular short-fiction compilations. Listener, I have something to share, which is that I stole it from the hotel. It was just in the room and I started reading it, and I didn't finish all of the short stories, and I had to. I had to, because that is the art of the short story, of the American short story. And so I took it with me. I'm probably going to get charged for it. I will pay it, it's fine. You don't necessarily have to read this literary journal, American Short Fiction, but I recommend diving into some short stories if you need a break from the news and you're not ready to tackle your next book.
Michael Calore: I'm not going to judge you for taking it. When I stay in Airbnbs, every once in a while, I will pilfer a book.
Katie Drummond: This is outrageous.
Michael Calore: But then I try to make good: when I'm staying somewhere and I finish a book, I put it on the shelf.
Lauren Goode: Also, it's not like I'm taking it and then passing it off as my own writing by scrambling a bunch of the characters around and spitting it back out. I'm not saying people steal like that, but sometimes they might.
Michael Calore: Sometimes they might.
Lauren Goode: Yeah.
Katie Drummond: My books stay with me and other people's books stay with them. That's how I run my book life, but I do have a recommendation.
Michael Calore: Okay.
Lauren Goode: Tell us.
Katie Drummond: And I think this actually is interesting, because I thought of this before we recorded, and then we just spent all this time talking about GenAI. And maybe it's indicative of our complicated relationship with these tools that my recommendation is actually about ChatGPT, which is: if you've ever wanted an interior designer but didn't want to spend money on one, sorry to everyone working in that profession. This weekend, I went into a little tizzy where I wanted to get new furniture for our living room, and you know I'm married to a chronic generative AI user and abuser, and I said to him, "Can you take photos of our living room and upload them to ChatGPT, and upload the URLs to the furniture stores that we like and where we typically buy furniture, and tell it that we have this many dogs and this many kids and all the salient details about our life, and have it redesign our living room and then generate images of the living room with this new furniture? And then can you get a list of links to all the different items of furniture that are in the redesigns?" It was very, very cool. We got all these different designs for our living room with lists of furniture. It was very cool.
Lauren Goode: That is very cool.
Katie Drummond: So if you are thinking about buying new furniture or moving stuff around in your house or redesigning something, ChatGPT can help, and then you can find an interior designer and you can also pay them, because God forbid I recommend putting anyone out of work. I don't want to do that, but it was a very cool exercise.
Kate Knibbs: I would read "I Let ChatGPT Redesign My Living Room," too.
Katie Drummond: And then I could expense... I could expense all my new furniture to WIRED's budget. This is a very good idea. This is a very good idea.
Kate Knibbs: We should all actually just pick a room. It could be like a themed issue.
Katie Drummond: Yeah, it could be a whole group project.
Michael Calore: I'm also going to recommend a book. It's called The Argonauts, by Maggie Nelson. This is a book that came out, oh, I don't know, nine years ago, 10 years ago, and I bought it used a couple of years ago and it sat on my shelf. It's a slim book, does not look like it's a chore to read, but I just never really got around to it. And then I picked it up on a whim a couple of weeks ago and I blew through it in about three or four days, and it is phenomenal. It's a memoir. It is about her life with her partner, who is a gender-fluid person. They have a child together, and it is also a mix of philosophy and history and family history. It's just this beautiful stew of writing from a very personal and almost academic distance, mixed in with the personal aspects of it. It's a weird book, difficult to describe, as you can imagine, considering that I'm having a difficult time describing it, but I would say that it's a memoir more than anything, and it's really touching and just a really beautiful book and just phenomenal writing. I also really love the way that it swirls to a conclusion.
Katie Drummond: Swirls to an ending.
Michael Calore: Yeah. It really does.
Katie Drummond: Just like we're about to.
Michael Calore: Just like we did. Well, thank you all for being here. This was a great discussion. Good show.
Lauren Goode: This was really fun.
Kate Knibbs: Good job, guys. Happy to be back.
Katie Drummond: Good job, team.
Michael Calore: Thanks for listening to Uncanny Valley. If you like what you heard today, make sure to follow our show and rate it on your podcast app of choice.
If you'd like to get in touch with us with any questions, comments, or show suggestions, you can write to us at uncannyvalley@wired.com. Today's show was produced by Adriana Tapia and Kyana Moghadam. Amar Lal at Macrosound mixed this episode. James Yost was our New York studio engineer. Shireen Mohyi fact-checked this episode. Jordan Bell is our executive producer. Katie Drummond is WIRED's global editorial director, and Chris Bannon is Condé Nast's head of global audio.

Why Gutting USAID Will Hurt America

Yahoo
26-03-2025

WIRED Senior Writer Kate Knibbs explains how the Trump administration's self-proclaimed "America First" policies are, in practice, anything but—particularly their effective destruction of the United States Agency for International Development (USAID). See why shuttering the operations of USAID will negatively impact the everyday lives of Americans.
Director: Lisandro Perez-Rey
Director of Photography: Constantine Economides
Editor: Matthew Colby
Host: Kate Knibbs
Line Producer: Joseph Buscemi
Associate Producer: Brandon White
Production Manager: Peter Brunette
Production Coordinator: Rhyan Lark
Camera Operator: Chris Eustache
Sound Mixer: Rebecca O'Neill
Production Assistant: Caleb Clark
Researcher: Paul Gulyas
Post Production Supervisor: Christian Olguin
Post Production Coordinator: Rachel Kim
Supervising Editor: Eduardo Araujo
Assistant Editor: Billy Ward

- 1 billion children immunized. 2.2 billion malaria cases prevented. 26 million lives saved from AIDS. For 60 years, USAID, America's foreign assistance agency, has fed the hungry and prevented disease worldwide. 40% of total global humanitarian aid came from the US, with two-thirds of that coming from USAID. Recently, President Trump and Elon Musk all but gutted the agency as part of their so-called America First policies. But in our globalized economy, the data suggests that slashing upwards of 90% of USAID's budget, or $60 billion, may have unexpected blowback here in the US, contributing to rising food prices, making us susceptible to epidemics at home, and damaging the livelihoods of many Americans, especially farmers in the heartland. WIRED pinpointed some critical food and medical cuts to USAID and placed their impact on the health and wealth of everyday Americans on the grid. This was the headquarters of the USAID offices in DC. Google Maps lists the office as permanently closed. Only a few hundred out of the agency's 13,000 employees will retain their jobs, but when they were in business, USAID wasn't spending American taxpayer funds only on foreigners, as many claim. 80% of the companies that had contracts with the agency were American. USAID was investing money back into the pockets of as many as 50,000 American contractors employed by universities, nonprofits, and aid organizations. This map will give you an idea of just how many American institutions were partnering with USAID before the cuts. From Columbia University's research center helping countries plan for climate change, to a partnership with the Coca-Cola Company improving access to water and sanitation services, USAID money funded American universities, businesses, and charities to the tune of $28 billion annually, only 0.7% of the total federal budget, which fed a multitude of programs aimed at promoting global health, democracy, and other foreign policy aims of the US. Because so much was affected, we can't cover it all in a single video, so we'll be focusing on the two most critical aspects of USAID: food and health. A huge USAID program, Food for Peace, bought surplus food from American farmers and delivered it to countries in need, benefiting over 4 billion hungry people worldwide since 1954. So when malnourished children received much-needed breakfasts in Afghanistan, or families in famine-prone Sudan lined up for a meal from an emergency kitchen,
41% of that food was sourced directly from American farmers, who were paid about $2 billion annually. That $2 billion bought everything from Iowa soybean oil to Oklahoma wheat, Kansas lentils, produce from Virginia, and Georgia peanut products, including a nutritional paste that's a powerhouse for fighting hunger. The products were then sent around the globe by USAID and its contractors, but by mid-February, just days after the cuts began, $489 million worth of food assistance, over 500,000 metric tons of American-grown food already paid for by USAID, remained stranded in ports or in transit, with a significant portion at risk of spoiling. In Larned, Kansas, the Pawnee County Cooperative Association reportedly had 1.5 million bushels of sorghum, a key grain in cereals, in storage with no one to buy it. Port Houston had 235,000 tons of wheat stranded in warehouses according to local sources, as well as 30,000 metric tons of cornmeal, pinto beans, lentils, rice, and vegetable oil. Similar issues reportedly impacted ports in Boston, Miami, Norfolk, Savannah, New York, Chicago, and Lake Charles. At ports in Kenya, nearly $200 million in emergency food aid remained undelivered. Contractors and local USAID staff responsible for getting the food, which had already been bought and paid for, from the Kenyan ports to the South Sudanese people did not receive payment due to the USAID pause. In South Sudan, about 7.1 million people, more than half the population, require food assistance, with 1.6 million children at risk of acute malnutrition. South Sudan lacks basic infrastructure like paved roads, making aid delivery expensive and inefficient. For example, PGE is a remote area in South Sudan that humanitarian workers can only access by taking a two-hour flight, followed by a four-hour canoe journey, then a six-hour trek through a swamp. As a result, those suffering will not receive this food. The food will go to waste or possibly be stolen from the docks. According to the World Health Organization, which the Trump administration tries to discredit, at any given moment 733 million people are experiencing food insecurity somewhere in the world. To combat hunger back in the US, scientists are studying things like the resilience of specific crops like wheat and peanuts. 17 food science research labs housed at various American universities, including Kansas State University, the University of Nebraska, and Purdue University, received millions of dollars from USAID's Feed the Future program. But due to the cuts, they've had to pause their research. The Soybean Innovation Lab at the University of Illinois has laid off 30 employees and expects to close down if funding isn't restored. They worked with farmers in Madagascar and Nigeria, as well as Pakistan, India, and Indonesia, to breed soybean varieties that are resistant to diseases like soybean rust. Research like this benefits American farmers by helping them prepare for crop diseases we see everywhere in the world. For example, knowing how to best grow crops in drier, hot conditions will be more crucial in a world increasingly touched by climate change. Without this research, farmers everywhere will be left ill-equipped in the face of a heating planet. Let's talk about rising food prices. What many don't realize is that USAID has helped keep prices of some products in check for Americans for years by supporting the production of specific agricultural commodities like chocolate, coffee, spices, and even rubber in developing countries. How?
Well, let's take coffee as an example. The US coffee industry contributes 1.6% to the US GDP and supports nearly 1.7 million American jobs. For years, USAID partnered with US coffee companies and small farmers abroad in places like Africa, Central and South America, and Indonesia to combat crop diseases and improve coffee supply chains by providing digital tools and training to coffee breeders, with a goal of increasing capacity and ensuring that US coffee businesses had access to a stable and high-quality supply. With the USAID cuts, this is no longer guaranteed and prices may spike. Similar programs in the Ivory Coast, Ghana, and Ecuador, where cacao grows, were in place to ensure America's supply of chocolate flows. Cacao is critical to the candy industry, which supports at least 70,000 American jobs. In Maryland, spice giant McCormick and Company has benefited from a USAID partnership with the Ohio-based Cooperative Business Association, or CBI, to enhance spice production in Indonesia. This program rehabilitated abandoned plantations, built new ones, and improved yields of vanilla bean, nutmeg, cloves, cinnamon, black pepper, and of course, pumpkin spice. CBI's local affiliate, Agri Spice Indonesia, supplied well-priced spices for a variety of McCormick products. Also in Indonesia, USAID has helped Royal Lestari Utama, a company owned by Michelin, get a loan for its sustainable rubber plantation, which feeds the US tire industry, particularly in South Carolina. These are just a few examples of how, by strengthening supply chains and improving productivity in developing countries, USAID-funded programs maintain the availability and affordability of key raw materials for US industries, whose products then turn out to be cheaper for American consumers. Now let's turn to the impact of USAID cuts on health. Up to a million lives were saved in 2016 when a famine in the Horn of Africa was predicted and responded to by the Famine Early Warning Systems Network, or FEWS NET, using data analysis of weather and armed conflicts to predict famines and distribute food aid. This successful system was paid for by USAID and largely run by Chemonics, a private, DC-based international development firm. Cuts by DOGE have led to the program going offline. Chemonics has furloughed 88% of its US-based workforce. Experts warned that without FEWS NET, humanitarian response efforts will be less effective, leading to more hunger and instability around the globe. And food instability in developing nations has been closely linked to increased violence and extremism. In northeastern Nigeria, the jihadist terrorist group Boko Haram offers meals to attract potential recruits. There are also reports from 2017 of ISIS luring unaccompanied child refugees out of Syria with food and cash. Since we're talking about ISIS, the USAID cuts affect payments and support to the Kurdish-led Syrian Democratic Forces, or SDF, who manage camps holding ISIS members and their families. Around 10,000 suspected ISIS fighters are imprisoned in 28 jails across northeastern Syria, with the largest being the Al-Sina prison in Hasakah. US and Syrian officials have warned that weakened security here could lead to an ISIS revival. Clearly, dismantling or even pausing USAID programs not only weakens humanitarian efforts but also undermines American national security and foreign policy objectives. Let's turn our attention to the impact of USAID cuts on our health.
The Trump administration revoked over 10,000 global health grants from USAID and the State Department that played a significant role in global disease prevention. These included outbreak surveillance, which means global collaboration to detect and respond to emerging infectious diseases. The global bird flu pandemic is currently spreading throughout the US. USAID funded avian flu surveillance in 49 countries around the world that involved collecting samples from farms with high levels of poultry mortality and testing them, notifying farmers of results, monitoring migratory birds and the cross-border poultry trade, and sharing all of that collected data. The USAID cuts have ended that program, which is bad news, because the response to bird flu, experts warn, requires global cooperation. The previous outbreak in 2014 cost the American poultry industry approximately $1.6 billion. The case of an Iowa man who died recently of Lassa fever after visiting West Africa is proof that infectious diseases from other continents are just an international flight away. Uganda is facing its eighth Ebola outbreak, with the first confirmed case being a nurse in Kampala who died on January 29th. The outbreak involves the deadly Sudan strain. However, contact tracing and traveler screenings in Uganda have been disrupted. Why? On February 26th, Elon Musk admitted that DOGE accidentally canceled USAID's Ebola prevention funding, but assured us once they discovered the error that it was quickly restored. However, as I reported for WIRED the following day, emergency waivers meant to sustain some of USAID's humanitarian programs, including those focused on Ebola, have been ineffective, in part because most USAID staff have been laid off, leading to delayed responses to Uganda's Ebola outbreak. With America now refusing to meet the moment in regards to Uganda's latest Ebola outbreak, CBS has reported that Russia has launched a mobile lab there to assist with outbreak containment. And alarmingly, a new unidentified hemorrhagic fever has emerged in the Democratic Republic of Congo. It has claimed over 60 lives and made over a thousand people ill. Historically, USAID played a key role in identifying pathogens by funding the testing of virus samples in laboratories. Dismantling USAID made this particular illness harder to investigate. One way to combat infectious diseases on the rise around the world, such as cholera in Afghanistan, polio in Yemen, and dengue across Central America and the Caribbean, is vaccines. And they don't just save lives. Vaccines also save money. According to a study by Johns Hopkins University, every dollar spent on immunization yields a $16 return by reducing the cost of illness treatment. In an additional analysis that considers the wider economic impact of illness, every dollar spent on vaccinations could save $44, according to the CDC. The USAID funding freeze has halted at least one notable vaccine development program, a $45 million award to the South African Medical Research Council, which aims to end HIV in Africa. In late January, a group of researchers from eight African countries planned to initiate a phase one clinical trial for two experimental HIV vaccines, enrolling dozens of volunteers in South Africa, Kenya, and Uganda.
- If this vaccine is found to be safe and efficacious, it could help prevent millions of lives of South Africans that are currently being lost.
- There are no FDA-approved HIV vaccines in the US yet, so new vaccines that are developed overseas could be promising contenders for the first HIV vaccine in the US too. That study is now on hold indefinitely, since the funding no longer exists. According to a report from the Guardian, an estimated 500,000 South Africans could die because of this blow to crucial research, plus the removal of other USAID funds, which assist in the diagnosis and dispensing of antiretroviral medications that keep infected patients alive. Another way to stave off illness is prevention. And in the Amazon basin, USAID was on the forefront of initiatives to control malaria in Brazil, Colombia, Ecuador, Guyana, Peru, and Suriname. Through the Amazon Malaria Initiative, USAID funded interventions tailored to region-specific needs. With funding cut off and employees furloughed, the trust and goodwill USAID has worked hard to grow has been thwarted, and once trust is lost, it's hard to build back. In 2023, there were an estimated 597,000 malaria deaths worldwide, mainly in Africa, with children under five constituting 76% of the deaths in this region. These food and health program cuts are just a fraction of the story of what USAID's dismantling ultimately means for our country and the world. Experts say that abruptly cutting off this work will kill untold numbers of people around the world. There are some people within the US who are unmoved by how disastrous this choice will be for people across the globe. They say our dollars are better spent solely on domestic projects, but this viewpoint fails to consider how interconnected we are and how deeply damaging it will be for Americans if our country is considered a callous rogue state. The US turning its back on humanitarian aid in such a sweeping and abrupt way is also an out-and-out win for China, a country that is still eagerly pursuing international development projects as a way to win power and influence abroad. They've already invested over $1 trillion in infrastructure as part of their own version of USAID. Ultimately, the cuts to USAID will have unexpected impacts on the livelihoods of Americans and make the US less safe. The full impact is only beginning to unfold, so stay updated with our continuing coverage on WIRED.com. Thanks for watching On the Grid. [gentle music]

The Ozempic Shortage Is Over

WIRED
21-02-2025

By Emily Mullin and Kate Knibbs Feb 21, 2025 10:58 AM The semaglutide shortage has officially ended in the US—which means the GLP-1 drug industry is about to undergo massive changes. PHOTO-ILLUSTRATION: WIRED STAFF; GETTY IMAGES The US Food and Drug Administration has determined that semaglutide is no longer in shortage, a move that will have implications for patients taking cheaper, compounded versions of the drug. Semaglutide, the active ingredient in Novo Nordisk's blockbuster diabetes and weight-loss medications Ozempic and Wegovy, has been on the FDA's shortage list since March 2022. Supply could not keep pace with fervent demand for the drug, which reached such dizzying levels of popularity that it transformed Novo Nordisk into one of the world's most valuable companies, with a market capitalization larger than the rest of the economy of its home nation of Denmark. In the intervening years, a lucrative industry of telehealth companies, medical spas, and pharmacies making and selling 'compounded' copies of the medications has arisen. These off-brand copies are sold at a steep discount—sometimes under $100 a vial—compared to the name-brand medications, which can be over $1,000 a month without insurance. At the end of October, the FDA changed the status of all dosages of Ozempic and Wegovy to 'available,' signaling that the end of the official shortage was likely in sight. It took until today, nearly four months later, for regulators to conclude that the drug was widely available enough to remove it from the shortage list. The FDA is giving '503A' compounders, typically state-licensed pharmacies or physician compounders that run smaller operations, until April 22 to cease producing the drug. It is giving '503B' compounders, which are larger outsourcing facilities that follow stricter manufacturing guidelines, until May 22. Under ordinary circumstances, it's not especially contentious when drugs come off a shortage list. But there is reason to believe that players within this industry will push back on this announcement. Drug compounding is a well-established practice; pharmacists are permitted to make copies of medications when there's a drug shortage, or when patients need versions made in specific dosages or without allergens. But the GLP-1 boom has created an opportunity for compounders that has transformed pockets of the industry, with compounding pharmacies producing off-brand duplicates for likely millions of patients. Robert MacArthur, director of pharmacy at the Rockefeller University Hospital, says that once a drug comes off the FDA shortage list, outsourcing pharmacies that make large batches of compounded drugs for health care facilities are not supposed to compound that drug anymore. But for smaller, traditional compounding pharmacies that make drugs for home use, it's a legal gray area. 'They can compound it if the physician writes the prescription for an individual patient and feels there's some compelling reason why that given patient needs that special compounded product,' MacArthur says. In December, the FDA declared that tirzepatide, the other popular GLP-1 drug, was no longer in shortage. The small 503A compounders had 60 days to stop production, while the larger 503B outsourcing facilities had 90 days to wrap up sales. Eli Lilly, which sells tirzepatide under the brand names Mounjaro and Zepbound, sent a flurry of cease-and-desist letters to hundreds of companies selling compounded versions. 
The compounding industry pushed back, with a trade group filing a lawsuit against the FDA arguing that the drug was actually still hard to access for patients. As of now, the FDA has set deadlines for compounders to stop producing tirzepatide products, but it is not enforcing the deadlines at the moment as the lawsuit is ongoing. Novo Nordisk has already attempted to curb the compounded semaglutide market. It has filed over 50 lawsuits against compounders, telehealth clinics, and medspas, alleging trademark infringement and unfair competition. More recently, it has petitioned the FDA to block production of compounded semaglutide by placing the ingredient on the 'Demonstrable Difficulties for Compounding' (DDC) list, which excludes drugs that are too difficult to compound safely. 'We are pleased the FDA has declared that supply of the only real, FDA-approved semaglutide medicines is resolved, affirming that Novo Nordisk is meeting or exceeding current and projected nationwide demand. No one should have to compromise their health due to misinformation and reach for fake or illegitimate knockoff drugs that pose significant safety risks to patients,' Novo Nordisk executive Dave Moore said in a statement. (The FDA does not evaluate compounded medications for safety, quality, or efficacy before they are sold.) 'I think the compounders have a real valid argument,' MacArthur says. 'This is a drug that is going to prolong people's lives and meaningfully increase the quality of their life. You've already got a whole population of compounders out there that are making it, the product quality is adequate, people are using it, and then you're suddenly going to take it away? I mean, I think that that would create a public uproar.'
