
Why Silicon Valley Needs Immigration
WIRED

A general view of the UC Berkeley campus, including Sather Tower, also known as The Campanile, as seen from Memorial Stadium in Berkeley, California. Photo-Illustration: WIRED Staff

Expanded deportations, a virtually shut-down asylum process, increased scrutiny of H-1B visa applicants—immigration policy has been overhauled under the latest Trump administration. And just last week, the Trump administration said it would begin revoking the visas of some Chinese students who are currently studying at US schools. On today's episode, we dive into the impacts that these changes could have on the tech industry, from the talent pipeline to future innovations. Articles mentioned in this episode: The Trump Administration Wants to Create an 'Office of Remigration' by David Gilbert, and US Tech Visa Applications Are Being Put Through the Wringer by Lauren Goode. You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Katie Drummond on Bluesky at @katie-drummond. Write to us at uncannyvalley@

How to Listen: You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too.

Transcript note: This is an automated transcript, which may contain errors.

Michael Calore: A quick note before we begin today. We recorded this episode before the Trump administration's travel ban barring citizens of 12 countries from entering the United States and before its proclamation to suspend all new student visas for students enrolling at Harvard University. Although we will get to student visas quite a bit in this episode. How's everybody doing this week? Lauren Goode: I'm good. I just got back from Katie's motherland, Canada. Michael Calore: Oh. Lauren Goode: Yeah. Katie Drummond: Lauren and I were in Vancouver together. Lauren Goode: We were. Katie Drummond: Although I saw her for probably 15 minutes in the span of like five days. I'm doing okay. I also, as we just established, was in Vancouver with Lauren at Web Summit. I took a red-eye home on Thursday night and it was three hours late and so that was a lot. Michael Calore: Yikes. Katie Drummond: And then Lauren, right before we started recording, just told me that I have a bobblehead, so I'm just grappling with that feedback. Lauren Goode: I did not say bobblehead, I said you had celebrity energy because your head presents well on camera. I don't know. Mike, how are you doing? Katie Drummond: Yeah, how are you doing, Mike? Michael Calore: I'm staying out of this one. Also, I have a gigantic head. I can tell you that I wear a size eight fitted cap, which is the largest size that they make. Katie Drummond: Do you want to know what size I wear? Michael Calore: Yes. Katie Drummond: I have to shop at a specialty hat store. Because my head actually doesn't... I can't wear. Lauren Goode: What is this store called? Katie Drummond: I can't wear normal hats. Lauren Goode: Is it called Bobblehats? Katie Drummond: No, I'm going to look it up. It's from Oddjob Hats. The last hat I bought was called Big Running Hat. Just Big Running Hats.
Lauren Goode: Do you also have one called Big Walking Hats? Katie Drummond: Probably. Probably. Lauren Goode: Oh. Michael Calore: Oh, it's too much. Lauren Goode: All right. Michael Calore: Should we get into it? Katie Drummond: Let's do it. Lauren Goode: Let's do it. Michael Calore: This is WIRED's Uncanny Valley, a show about the people, power, and influence of Silicon Valley. Today we're going to be talking about the Trump administration's policies around immigration and the effect that those policies are poised to have on the tech industry. Since day one of the current administration, immigration policy has been overhauled: the asylum process was virtually shut down, the obscure Alien Enemies Act was invoked to deport hundreds of people, and birthright citizenship is being challenged in the US Supreme Court. Visas have been under increased scrutiny. WIRED recently reported how the H-1B visa application process is becoming more hostile, and last week the administration said it would begin revoking the student visas of some Chinese students who are currently studying at US schools. So today we're going to dive into the impacts that these changes could have on the tech industry, from the talent pipeline to future innovations. I'm Michael Calore, director of Consumer Tech and Culture here at WIRED. Lauren Goode: I'm Lauren Goode. I'm a senior correspondent at WIRED. Katie Drummond: And I'm Katie Drummond, WIRED's global editorial director. Michael Calore: I want to start us off by focusing on how the Trump administration has been handling student visas. Just last week, Secretary of State Marco Rubio announced that the administration would start to "aggressively" revoke visas for Chinese students. The State Department said it would focus on students from critical fields and those with ties to the Chinese Communist Party, but also that it would just generally enhance the scrutiny across the board. The vagueness of these guidelines has sent students, parents, and universities into an emotional tailspin. What do we make of these latest developments? Lauren Goode: So there were actually two directives that went out last week and I'm sure we're going to hear more, but I think they're both worth noting. The first was that a directive was sent to US embassies around the world telling them to pause any new interviews for student and visitor visas, and that included the F, M, and J visas, until further notice. And this whole idea was that it was in preparation for an expansion of social media screening and vetting. So basically the State Department is going to be looking much more closely at students' online activity, social media activity, and consider that as a part of their interview process when they're applying for a visa to the US. That was already a part of the application process, but now it's just going to be expanded. We don't really know what that means. The other was the revoking of visas for Chinese students, as you mentioned, Mike. And really I think what this does is it adds another tool to this current Cold War of sorts that we're having with China, whether it's with the tariffs or whether it's measures like these, it's clear that the current administration wants to have the upper hand. And what we've reported at WIRED is that if this continues and the courts allow it, this would all have a significant effect on higher education because roughly a quarter of the international student population in the US is from China.
And also, this is something I think a lot of people don't realize, I personally didn't realize until I started doing more research into this, international students often pay full tuition or close to it when they come here into the United States for school, which makes it an economic lifeline for a lot of these universities and also in some ways helps offset the costs for domestic students, US students who are getting scholarships or getting partial reduction in tuition and that sort of thing. I do think in general it's dangerous territory to start targeting students under a specific nationality for these alleged national security reasons. There are going to be questions about how effective it is longterm, but also how this could potentially weaken the US technology sector in the longterm. Katie Drummond: Yeah. And I think, Lauren, you're right to point out these two directives and I think that both got a fair bit of press attention, but I was surprised that the first announcement, this idea that we are going to be doing enhanced social media screening and vetting of international students and people applying for visas to come to the United States, the fact that that was not an international outrage when that was announced is very telling to me in terms of how much is happening in the news in the United States every single day because that is a very chilling announcement to be coming from the Secretary of State in this country. It is a massive free speech issue and really speaks I think to what will be an ongoing theme for WIRED and unfortunately already is, which is just the techno-authoritarian world, country that we now live in where these tools are essentially being weaponized to surveil and monitor not only US citizens, but people who proactively want to live and work and study here, that if you dare have an opinion that is contrary to the opinion of the Trump administration, that you could potentially have your visa revoked or not even be able to qualify for a visa. I think it's also important to note that everything that Lauren just spelled out and that we're talking about is part of this much larger conflict that's been unfolding between the Trump administration and higher education. So you have this Ivy League battle playing out between Trump and Columbia, Trump and Harvard. A lot of that obviously having to do with free speech issues and the Trump administration, again, essentially looking for institutions of higher education to adopt their viewpoint as opposed to being places where a plurality of points of view can be discussed and debated and held. There was already an attempt made to block Harvard from enrolling international students. A federal judge has blocked that for now, but we will have to see where it nets out. And I think regardless of where that one legal decision nets out there is, for so many reasons, this chilling effect where the United States is all of a sudden no longer a desirable destination for students, both at an undergraduate level and a graduate level. You have not only the Trump administration basically going to war with the best colleges in the country, you have them going to war with the actual student visa process, and then you have them going to war with research and science and even blocking already billions of dollars of research funding that is earmarked ostensibly for these institutions and now means that these institutions are much less attractive destinations. 
So it's not like, oh, a judge reverses a couple of decisions or one decision or blocks one thing from happening and all of a sudden we're in the clear again; this is already very clearly becoming a systemic and long-term crisis for the United States. Michael Calore: And this choking off of talent coming into research institutions and into jobs in the United States is also happening at a moment when China and the US are currently involved in an AI arms race. In January, the Chinese AI company DeepSeek showed off a reasoning model that is demonstrably and seemingly just as powerful as ChatGPT, but was developed for a fraction of the cost. So the US definitely needs to keep bringing in top AI talent, but how are these restrictions on student visas going to potentially shape the growth of the AI industry in the US? Lauren Goode: Yeah, this is something that when the news started to trickle out last week, we at WIRED were thinking, "Okay, this is really in our wheelhouse." We cover AI so closely, we have for years, and automatically the question is what does this mean for the AI race? We ended up reporting a story last week, it was myself, a few other WIRED folks, Kate, Louise, and Will, and some of the sources that we spoke to were pointing out the contradiction that exists here in the White House saying that AI is one of its top priorities and then trying to send the people who are doing this kind of research, this critical research for us here in the United States, home back to their home countries, or not letting them in in the first place. And at some US colleges, I would say probably a fair number of them, international students do make up the majority of doctoral students in departments like computer science. One of our colleagues, Kate Knibbs, talked to someone at the University of Chicago who said that foreign nationals accounted for 57% of newly enrolled computer science Ph.D. students last year. We know that immigrants have founded or co-founded nearly two-thirds of the top AI companies in the United States. That's according to a 2023 analysis by the National Foundation for American Policy. And this is something that's been going on for a long time. I had this interesting conversation with a well-known economist last week. His name is William Lazonick. I was asking him his thoughts on this crackdown on student visas, and he made an important observation, which is that foreign students pursuing those STEM careers have actually been critical to the very existence of graduate programs in those fields. And some of this is cultural. Back in the 1980s, there was this big shift that was happening in the US around money basically. It was the era of Reaganomics and "greed is good," and American students were gravitating towards careers in finance. At the same time, Lazonick said, there were significant advancements happening in microelectronics and computing and biopharmaceuticals, and that opened the window for foreign students to say, "We're going to study STEM." So what we are potentially on the brink of right now by thwarting or revoking these visas for foreigners could literally affect the outcome of American technology and science development for the next several decades. Katie Drummond: And particularly at a moment where, as you said, we're in this Cold War with China, we're in this AI arms race. You hear it from the administration, you read about it in WIRED, you hear about it from Sam Altman, other leaders of the AI industry, this like, "We must beat China. We must beat China."
And then stuff like this happens and you feel like, "Let's just hand it to them. Let's just give it to them." Because we are basically doing that by disincentivizing not only Chinese students, but just brilliant people from all around the world, from coming here, bringing their intellect here, bringing their ideas here. We're basically telling them, "Go somewhere else. Maybe go to China." And something I did find fascinating in that reporting, Lauren, was that the vast majority of PhD students from China and India actually typically intend to stay in the US after they graduate. While the majority of people from other countries, places like Switzerland and Canada, report actually planning to leave, maybe they want to go back to their home country, maybe they want to go somewhere else, but it's rejecting the people who are most committed to staying here and to contributing to new technology in the United States is a certain kind of choice. And so other countries are already trying to take advantage of that. Hong Kong is already trying to attract Harvard students. The UK is setting up scholarships. There's a lot going on outside the United States in terms of basically trying to make the brain drain happen for us. Our loss is all of their gain. But when you put it in the context of this AI race and the US and China of it all, it feels like what we are doing is distinctly disadvantageous for us in this moment. Unless you both disagree and think I'm missing something. Lauren Goode: No, we always say on this podcast, it would be nice if we vehemently disagreed with each other because it would create tension. But I think in this case, we are all aligned on this. Michael Calore: Yeah. This scrutiny over foreign nationals, it doesn't just end at academia, of course. It also extends into the workforce here in the US and work visas. Lauren, you recently reported on how the process to obtain an H-1B visa has become more difficult recently. Can you tell us a little bit about what H-1B visas are and why they matter so much to the tech industry in particular? Lauren Goode: Sure, yeah. So H-1B visas are work visas that are granted for specialty occupations. They're typically valid for three years. They can be extended in some cases. This type of visa was first introduced in 1990 as part of a broader immigration act. And the idea is that it's supposed to help employers hire people with specialty skills that they might not otherwise get from the talent pool that already exists in the US. And the H-1B is a bit of a controversial visa. Even just saying, so you can hire people outside of the US because there are people who don't have that skillset here, naturally prompts the question for some people, "Wait, why are we not educating and training people in the US to have those jobs?" But basically what I was starting to hear from immigration attorneys who I was speaking to is that the requests for evidence, RFEs, had shot up since Trump took office in January of this year. Typically, when a person is applying or petitioning for an H-1B, their lawyer submits a bunch of paperwork on their behalf and that typically will include resumes, awards, letters of prestige, letters of recommendation from colleagues and friends and that sort of thing. You basically have to put together this packet to prove that you're worthy of this specialty visa. And then sometimes it would get bounced back and USCIS would ask for more requests for evidence. In this case, a lot of visa applications are being sent back. 
There are a lot more RFEs or requests for evidence for applicants. And that's something that four different immigration attorneys I spoke to said they're seeing happening. It's also not just happening across H-1B. There's another type of visa called the O-1 Extraordinary Ability visa. Once again, this is a specialty visa. A lot of tech entrepreneurs, engineers, and founders alike will come here under the O-1 visa and folks in that world are starting to say that they're getting pushback on their applications as well. All of this, it's instilling fear amongst some entrepreneurs and tech workers in the Valley, and it's creating a climate of uncertainty where people who seemed so committed and excited to come here and build their companies here and contribute to the technological environment here are now rethinking that because of what's going on with visa applications. Katie Drummond: Ugh. That is so bleak. 66% of people working in tech in Silicon Valley are born outside of the US. That is just an astonishing number to think about that being at risk. Lauren Goode: Yep. We're talking about the rank and file in a sense, but also just look at some of the CEOs- Katie Drummond: Yeah, look at the leadership. Lauren Goode: Of the companies we're talking about. Sundar Pichai and Satya Nadella, and I think the most... Should we talk about the most obvious one? Katie Drummond: I was going to say, just look at Elon Musk. Lauren Goode: Yes. Katie Drummond: What an international success story he is. Lauren Goode: Yes. Katie Drummond: What a success he has been for the United States of America. I will say, the H-1B visa program is not perfect. It's certainly been criticized for not being a fair system or a fair lottery, but despite the fact that this is an imperfect system, none of this actually feels like an approach to fix any of these problems or challenges, it's more just creating extra adversity and uncertainty around a process that's already very lengthy and very expensive. Michael Calore: So these challenges to the visa application have ramped up recently, but we're already seeing the effects of this, right? Lauren Goode: Yeah, this is something that's harder to quantify right now because these visa policies are just getting put in place. Everything's just changing. But I think we can qualify it by saying that the folks that we're talking to in Silicon Valley who are either here on a visa or they were hoping to stay on an extended visa or they were thinking of maybe coming here and were working with attorneys to get that process started are now just reconsidering everything. You're already throwing yourself into a pretty uncertain world when you decide to launch a startup. You're choosing hard mode for yourself when you do that. So now throwing this uncertainty into the mix and thinking like, "Am I actually still going to be able to be here in three years if that's how long it takes me to actually make a product or build up a profitable business or raise my next funding round or something?" And if you can't see beyond that, I don't see how you realistically say like, "Oh, the US seems like a good bet right now." Katie Drummond: It just underscores how systemic and long-lasting this is going to be. Even if this were six months of bad federal policy and somehow the administration wakes up overnight and flips a switch and we see a lot of this pressure and additional scrutiny and adversity around immigration, around H-1B visas ease, there has already been so much damage done.
We are going to feel this in this country for such a long time. Michael Calore: One of the things about immigration policy that we have to talk about is something that our colleague David Gilbert has reported on for WIRED, and that is, as part of a reorganization of the State Department, the Trump administration is creating an Office of Remigration. And in very simple terms, remigration is an immigration policy embraced by extremists that calls for the removal of migrants, including non-assimilated citizens. What do we make of this? Katie Drummond: So I talked a little bit earlier about being surprised by Marco Rubio announcing that enhanced social media scrutiny. I was surprised that that wasn't more of an outrage, that it didn't get more coverage. This is even more extreme in that context, and it is a truly shocking development in this administration's war on anyone who is not a white American. That is basically what this is. I was shocked when I read this story last week and realized that this should be front page news for every news organization in the United States, and somehow it just wasn't. Lauren Goode: So the whole idea behind this is that they want to create a white ethnostate in this part of the world. Katie Drummond: That is our understanding of it, yeah. There is a long history to the idea of remigration and it really comes together through the lens of MAGA, it was present in the administration's first term as well. You had the Muslim ban, you had this idea of building a border wall, and I think what's so different this time from 2016, there's a lot that's different this time, I think big picture as we have seen, what's different is that this time the administration really means business. They're buttoned up, they're here to get the job done. And so it's the speed and the intensity at which these ideas, this very racist idea of remigration, is going from just being something that's done in a scattershot way to now showing up as a tactical, specific policy proposal that is being released in official government documents. It's just a very different kind of approach and it feels much more real. It is much more real. And it's happening so quickly and amid I think so much other news that people are just not seeing that it's happening, and that's really scary. Lauren Goode: And what happens too I think is that there are all different kinds of immigration policies we're talking about here and if you're not paying close attention you might conflate them. There's a difference between the asylum process being shut down and the Alien Enemies Act being overhauled, and what may be going on with student and foreign visitor visas, Extraordinary Ability visas, which is different from what's being proposed with this remigration document. And a lot of it is happening under the guise of, "This is better for national security." There are of course going to be some instances in which that is true. For example, The Stanford Review reported, I think it was a few weeks ago now, that they'd become aware of Chinese nationals actually trying to spy on Stanford University and its students. They'd purported to be other students. This sort of thing does happen, there are nations that are our adversaries that want to get information from the United States and wield it in nefarious ways, but for the most part, the Trump administration is putting immigrants in this giant bucket and creating this world in which they're all a threat to the United States. And that is absolutely not the case.
Michael Calore: Yeah, these policies are going to obviously shape the culture of this country and they're going to shape the business that is done in this country. But of course, they are absolutely going to shape the technology industry. So let's take a break and when we come right back, we'll talk about the effects that these policies will have on tech. Welcome back to Uncanny Valley. We've been talking about the Trump administration's immigration policies and how they could shape the future of tech development in the US, and I'm curious to know how tech companies and workers have been reacting to these measures so far. Lauren Goode: I would say the number one thing I've heard directly from folks is that they are scaling back on their travel to conferences, whether they're academics or tech workers. And that may have a little bit more to do with what has been going on in some intermittent cases at the border, of people getting detained at the border. But also people are thinking about the status of their visa right now and whether they're an American citizen or they're here on a visa. Tech conferences and academic conferences are just a part of this world. Katie and I were just at one in Vancouver. And so if you have concerns about being let back into the United States after traveling, you may decline to go to one. And the same goes for universities. I think Brown University urged its international staff and students to postpone any plans to travel outside of the US out of an abundance of caution. Katie Drummond: It's interesting to think about the flip side of that because for most of the tech industry and the human beings who work in that industry, this is a very scary thing. It's affecting how they do their jobs, it's affecting whether or not they travel. And then you have the flip side of it, which is where there are certain parts of the tech industry who are really benefiting from these new policies. And I think Palantir is probably the best example of that. So Palantir is the brainchild of Peter Thiel, obviously a megadonor to the GOP. And Palantir is really making it rain with the Trump administration, and they are benefiting tremendously from these policies and from DOGE efforts and administration efforts to centralize and unify data about American citizens and about immigrants to the administration. God knows what you could use all of that information for once it's centralized. Palantir recently won a $30 million no-bid contract to build ImmigrationOS, which essentially provides real-time data about the whereabouts of migrants and about deportations. Palantir obviously has worked with the US government for a very long time. They've had a contract with ICE since 2011, so that's almost 15 years ago. But we are really seeing the surveillance state that Palantir helps support grow exponentially and grow very quickly as a result of the administration's aims around immigration for one thing, but also just their aims to basically stand up and run an authoritarian state that would impact not only immigrants but US citizens as well. Michael Calore: So some tech companies are obviously seeing a paycheck opportunity in these immigration policies, but we can't say that the tech industry is operating as any kind of bloc, like they're not lockstep ideologically aligned with the immigration policies. And a lot of key tech leaders have been outspoken about the fact that they're not too happy with these policies, right? Lauren Goode: Yeah. It's honestly a little bit confusing.
Someone like Elon Musk has in the past been in support of the H-1B. He employs more than 1,000 people on that type of visa. He even used it himself in his early years in the US, and he has in the past tweeted in support of immigrants being in Silicon Valley and contributing to the economy here. More recently though, he has called for a reform on it, and he's not alone in that. Same with Marc Andreessen, obviously one of the most vocal people, influential people in Silicon Valley. Surprisingly, they've got some interesting bedfellows. The Democrat Ro Khanna of California, Vermont's Bernie Sanders, they're also calling for a reform of the H-1B program. It goes back to what Katie was saying earlier, that there have been some critiques of H-1B. There's been a lot of backlash to the program, and it's hard to know sometimes whether it's coming from this kind of vitriolic or potentially racist place around how people feel about immigrants versus, "No, I'm actually in support of this because it's good for the US economy and the tech industry, but the process is broken." Katie Drummond: To me right now what we're looking at in the year 2025 is just part of this larger trend of tech leaders staying silent or muting their criticism or maybe posting something on X, but largely staying silent when it comes to politics, when it comes to political issues, at least publicly. We don't know what's happening behind the scenes, what kinds of lobbying efforts are going into trying to sway the administration one way or another when it comes to H-1B visas, when it comes to the importance of brilliant people from around the world being able to study and work in the United States and in the tech industry. But publicly for sure, we are not seeing that really robust resistance on the part of the tech industry. And that is certainly strategic because these guys know that this time the administration means business, they need to play ball, they need to work with this administration. And so we can only hope that behind the scenes there are more vigorous discussions happening than what we're seeing play out publicly. Michael Calore: It's distressing to me that the disconnect is so loud here because we really have to underscore how important of a positive role immigration has played in the growth of the tech industry. And in Silicon Valley in particular, like Lauren you were talking about earlier, some of the largest companies like Google and Microsoft have all had either founders or co-founders or CEOs who are first or second-generation immigrants. And if you look at a list right now of the country's current startups that are worth more than a billion dollars, more than half of them have an immigrant founder. Yeah. So the longterm stakes of keeping talented researchers and engineers and businesspeople out of the country seem deeply, deeply consequential. Lauren Goode: It's also just not a zero-sum game. If the tech industry continues to grow, presumably there would be enough room for having high-skilled American workers and high-skilled foreign nationals working together. Michael Calore: As it always has been. Okay, let's take another break and we'll come right back with recommendations. Thank you both for a great conversation. We are going to shift gears and talk about something completely different, which is our own personal desires and loves. We're going to do recommendations. Who wants to go first? Katie Drummond: My recommendations. 
It's been a busy time, so I feel like I'm a little bit limited on hobby activities, but a book I just finished that I do recommend, Barry Diller's memoir. If you're not familiar with Barry Diller, I believe he is now the chairman of IAC. But a long-time executive, invented the modern-day Hollywood approach to movie-making. It was great, so I highly recommend that. But my other recommendation is that last night I was thinking about what to have for dinner, and I made an omelet, and I haven't had an omelet in a while. The omelet had a red pepper, it had spinach, and it had shredded cheese, and it was just a really nice reminder if you're thinking about what to have for dinner tonight, a nice omelet, some toast with french butter, a can of seltzer, you might just be all set. That and a book. Michael Calore: Lauren, what is your recommendation? Lauren Goode: My recommendation is after you make your breakfast for dinner, you should check out the Brazilian film I'm Still Here. When I was flying home from Vancouver last week I started watching it on the plane and did not finish it. It was one of those things where I went home, unpacked, and then immediately bought the movie because I was like, "I need to finish watching it." Katie Drummond: Wow. Lauren Goode: And I loved it so much that I knew I wanted to own it. It's beautiful. It's beautifully done. It's based on a true story of a Brazilian congressman who is abducted during the military dictatorship. In Brazil that was at its peak in 1970, 1971. And really it's about his family too. It's about his wife, who's this incredibly strong woman in character, and their five children. And because it's the 1970s, the world is just different. Technology is limited, they have a family camcorder and that's really it. And the kids are just running around in their swimsuits all day long and things just feel simpler, but also complicated. And there are these scenes in the beginning where people are basically being rounded up by the military and you hear families having these conversations of, "Should we stay or should we go?" It's chilling, but it's a beautifully done film and so I highly recommend I'm Still Here. All right, what's your recommendation? Michael Calore: I'm here to tell the people to watch Mountainhead. This is a fiction film that feels closer. Lauren Goode: Just when I thought we were getting away from the tech bros. Michael Calore: It's a fiction film from Jesse Armstrong who is the creator of Succession. This is a movie that he did for HBO. We're just calling it HBO. Everybody deal with it. It's a bro fest. It's about four tech founders who gather at the Mountain retreat for a social weekend to catch up. There's a strict no deals policy, but of course that policy goes by the wayside as soon as things start happening. The four principal actors are Steve Carell, Jason Schwartzman, Cory Michael Smith, and Ramy Youssef. And if you liked the witty back and forth and the weird absurdist drama in Succession, there's plenty of that here. It's also very much of the moment because the backstory that happens during the film is that the world is embroiled in a bunch of political chaos because of AI DeepFakes on social media that are very inflammatory politically. Lauren Goode: Great. So also based on a true story is what you're saying. Michael Calore: Yeah. Katie Drummond: I do want to watch that. I would like to watch it. I will watch it. Michael Calore: It's not exactly a good time, but it is a rewarding time. 
Lauren Goode: I also will watch Mountainhead, but I'm actually wondering, and Katie, while we have you on the podcast, if I can just ask you, does that count as work? Because I interview those- Katie Drummond: No. Lauren Goode: Bros all the time, and so I can just take two hours during the day and watch that, right? It's work. Katie Drummond: Abso-fucking-lutely not. Lauren Goode: All right, we answered that. Katie Drummond: We sure did. Lauren Goode: Ooh. Michael Calore: Thanks for listening to Uncanny Valley . If you like what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you'd like to get in touch with us with any questions, comments, or show suggestions, write to us at uncannyvalley@ Today's show is produced by Adriana Tapia and Kyana Moghadam. Amar Lal mixed this episode. Jake Lummus was our New York Studio engineer. Matt Giles fact-checked this episode. Jordan Bell is our executive producer. Katie Drummond is WIRED's global editorial director. And Chris Bannon is the head of Global Audio.

Let's Talk About ChatGPT and Cheating in the Classroom
WIRED

May 23, 2025

Photo-Illustration: WIRED Staff/Getty Images

There's been a lot of talk about how AI tools like ChatGPT are changing education. Students are using AI to do research, write papers, and get better grades. So today on the show, we debate whether using AI in school is actually cheating. Plus, we dive into how students and teachers are using these tools, and we ask what place AI should have in the future of learning. You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Katie Drummond on Bluesky at @katie-drummond. Write to us at uncannyvalley@

How to Listen: You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too.

Transcript note: This is an automated transcript, which may contain errors.

Michael Calore: Hey, this is Mike. Before we start, I want to take the chance to remind you that we want to hear from you. Do you have a tech-related question that's been on your mind or just a topic that you wish we'd talk about on the show? If so, you can write to us at uncannyvalley@ and if you listen to and enjoy our episodes, please rate it and leave a review on your podcast app of choice. It really helps other people find us. How's everybody doing? How you feeling this week? Katie Drummond: I'll tell you how I'm feeling. It's Katie here. My vibe levels are up. I'm feeling really good. I was at Columbia University earlier this week with five of our fantastic editors and reporters at WIRED because we were honored at the Columbia Journalism School this week for our politics reporting. And so we got dressed up, I gave a speech and it was so wonderful to have a minute to sit back and take a breath and think about all of the journalism we've done in the last several months and celebrate that. And it was also really, really cool to just see and talk to journalists who were graduating from journalism school and feel their energy and their excitement and their drive to do this work. Because I think, as you guys know, and you probably agree, we're all quite tired. Lauren, how are you? Lauren Goode: When you said, "Because we're tired," I wasn't sure if you meant we're just tired in this moment or we are existentially tired, because I am a little tired in this moment, but I am not existentially tired. I'm here for the fight, Katie. Katie Drummond: Oh, I'm so glad to hear that. Lauren Goode: Yeah. Katie Drummond: Yeah, I'm tired in this moment. I just think it's so nice to spend some time with a couple hundred people who are new to this and just so excited to get down to business. It was very cool. Michael Calore: How much ChatGPT use is there at Columbia University in the journalism department, do we think? Lauren Goode: Good question, Mike. Katie Drummond: I really hope very little. Michael Calore: Me too. For the sake of us all. This is WIRED's Uncanny Valley, a show about the people, power, and influence of Silicon Valley, and today we are talking about how AI tools like ChatGPT are changing education from middle school to graduate school.
More and more students are using generative chatbot tools to gather information, finish assignments faster and get better grades, and sometimes just write things for them. Just this month, there has been a ton of reporting and discourse on this trend, and some of it has been fairly optimistic, but a lot of it has also been critical. As one user on X put it, "The kids are cooked." Lauren Goode: The kids are all right. Katie Drummond: Which X user was it? I can think of a few. I'm just curious. We don't actually know. Michael Calore: So on this episode, we're going to dive into how students are using ChatGPT, how professors are using it, whether we think this trend is, in fact, cheating when the students use it, and what AI's place could be in the future of learning. I'm Michael Calore, director of Consumer Tech and Culture here at WIRED. Lauren Goode: I'm Lauren Goode. I'm a senior correspondent at WIRED. Katie Drummond: And I'm Katie Drummond, WIRED's global editorial director. Michael Calore: So before we dive into what has been happening with AI and students potentially using ChatGPT to cheat in their coursework, I want to have all of our cards on the table. Did either of you cheat in high school or in college? And if so, how? Katie Drummond: I feel like I should go first here because I'm the boss and I want to set Lauren up for success in her answer. I did not cheat in college. I was a very serious person in college. I was getting an undergraduate degree in philosophy, which felt like a very serious thing to be doing at the time. So I was totally above board. And also, as I was thinking about this earlier, this was in the early 2000s and it wasn't, I don't think, or wouldn't have been particularly easy to cheat at philosophy back then, whereas interestingly, it would be pretty easy to cheat at philosophy now. You're reading a lot. You're writing a lot of essays. It's hard to imagine how I would've effectively cheated, but I didn't cheat. I did cheat in high school though. Everybody cheated all the time. I'm not saying I cheated all the time. I'm not going to answer that question, but I did cheat. I specifically remember we had graphing calculators and we would program equations and answers into the calculators using special code so that teachers, if they went through our calculators, they wouldn't be able to tell that they were cheats. But we went to pretty great lengths to cheat on math exams, which is so stupid because I would've done great on the math exam regardless, but there was just something about being able to get away with it. Lauren Goode: Do you feel like a weight has been lifted from you now that you have confessed? Katie Drummond: No, I don't care. Look, I think that most students, at least in middle school and high school, dabble with cheating, and so I have no shame. What are they going to do? Strip me of my high school diploma? Good luck. Lauren Goode: Yeah, it's kind of a rite of passage. Katie Drummond: Exactly. Lauren Goode: I was very similar to Katie in that I did not cheat in college. In high school though, I remember reading Cliff's Notes for some book assignments. My best friend and I also did some light cheating in high school because the first initials of our last names weren't that far apart, and it was a small school as well, so she was often sitting in front of me and I was directly behind her. And we had a tapping scheme where we'd tap our pencils during Scantron tests. Katie Drummond: Wow. Michael Calore: Oh, like sending secret messages to each other.
Lauren Goode: Yeah, yeah. So if she was on question 13, she would sort of slide her Scantron to the side of the desk so that you could see which number, which question, number 13, and then the person who had the answer would tap their pencil a corresponding number of times to be like, answer A, answer B, answer C. Anyway, I don't want to implicate her. Totally. She's an adult now with a career and two grown children, and I'm not sure if the statute of limitations has expired on this grand felony from Notre Dame Catholic High School. So maybe we can scrap that from the record. Thank you very much. Mike, did you cheat? Michael Calore: No, I was a total goody-goody, like a super-duper do-everything-by-the-book Eagle Scout kind of kid. Didn't cheat in high school. I did encounter a course in college that I had a really hard time keeping up with. It was the 19th-century British novel, and the reading list was absolutely brutal. It was one super long, boring book every week. And I mean, there was some good stuff in there, like Jane Eyre and Frankenstein. And then there were absolutely terrible books in there, like Barchester Towers and The Mayor of Casterbridge. So I learned the art of the shortcut. I would zoom in on one chapter and I would read the Cliff's Notes, and then I would read that chapter and I would be able to talk about that chapter in depth on a test. Katie Drummond: Oh, that's very smart. That's smart. But not cheating. Michael Calore: Not necessarily cheating. I don't consider Cliff's Notes to be cheating. I'm one of those people. Lauren Goode: Why not? Michael Calore: Well, because you're still actually doing the work and comprehending. And I think some of the examples that we're going to talk about don't even have that step in them. They just sort of skip over all the learning. Lauren Goode: Yeah, but you're not understanding the full context of where that author fits into a certain category of other writers. Katie Drummond: Lauren, I think that what you're trying to do right now is distract both us and our audience from your Scantron felony, when in fact, it seems like Mike is the most innocent party here. I just need to say. Lauren Goode: Fair enough. Michael Calore: At least I did the reading. All right, well we've all come clean. So thank you for all of that. And we can acknowledge that, of course, cheating is nothing new, but we're talking about it now because of the use of AI tools like ChatGPT by students and how it has exploded in recent years. It's become a topic of debate in both the tech and education spheres. So just to get a sense of the scale of how much students are using AI, one estimate by the Digital Education Council says that around 86% of students, globally, regularly use AI. During the first two years that ChatGPT was publicly available, monthly visits to ChatGPT steadily grew and then started to dip in June when school gets out. Katie Drummond: 86%. Michael Calore: 86%. So yeah, I've used AI in my school. Katie Drummond: That is an astonishing figure. Michael Calore: So the appeal of something like ChatGPT, if you've used it, you understand why it would be useful to students. The appeal of using it is pretty obvious. It can write, it can research, it can summarize, it can generate working code, but the central question remains: Is using ChatGPT in schoolwork cheating? Where do we draw the line here?
Katie Drummond: So I don't think that there's a black and white answer, which is good for the length of this episode, but I think that that informs my overall view about AI and education, which is that this technology is here, you can't hide it, you can't make it go away. You can't prevent teenagers and young adults from accessing it. So you need to learn to live with it and evolve and set new rules and new guardrails. So in that context, I think there are a lot of uses of AI for students that I would not qualify as cheating. So getting back to the Cliff Notes debacle, I think using AI to summarize information, like say you're coming up with notes to help you study and you use AI to summarize information for you and come up with a study guide for you, I think that's a fantastic use of AI and that would actually just save you a lot of time and allow you to focus on the studying part instead of the transcription and all of that stuff. Or honestly to me, using it to compile research for you that you'll use to then write a paper, I think use cases like that are a natural evolution of technology and what it can help us do. I think for me, where AI becomes cheating is when you use AI to create a work product that was assigned and meant to come from you and now doesn't. But Lauren, I'm curious to hear what you think. Lauren Goode: Well, it would make for a really good podcast if I vehemently disagreed with you right now. I think we're pretty aligned on this. Earlier this week I happened to be at the Google I/O conference, which is their annual software conference, and it's a huge AI fest. It's an AI love fest. And so I had the opportunity to talk to a bunch of different executives and many of these conversations were off the record. But after we got through the round of like, "Okay, what's the latest thing you announced?" I just said, "How are you feeling about AI and education? What's your framework for thinking about this?" And one of the persons said, "Are you using it to replace the goal of the exercise?" And it's a blurry line, but it's, I think, a line to draw in terms of whether or not you're "cheating". So if you're going to ask that question, you first have to determine the goal and then you have to determine what the product is. The product of an education is not actually test scores or assignments. The product is, are you learning something from doing it? So if you're using AI to generate an output, it's understandable that you would say, "Does this output demonstrate cheating?" But the cheating actually happens during the generative part of generative AI. And once again, that's very fuzzy, but I think that if the goal of an assignment is not just turn this thing into your teacher's desk on Tuesday morning, goal of it is, did you learn something? And so if you're using AI to cheat through the learning part, which is like I think what we're going to be discussing, then yes, I guess that is cheating. Broadly, the use of these tools in education, just broadly speaking, doesn't scream cheating to me. Katie Drummond: I think that's a really interesting way of thinking about it actually. I like that a lot. Thank you person at Google. Michael Calore: Yeah. If the assignment is to write 600 words about the French Revolution, then that's obviously something that ChatGPT can do for you pretty easily. 
But if the assignment is getting knowledge into your brain and then being able to relay it, then to prove that you've memorized it and internalized it and understand it, then I think there's a lot of things that ChatGPT and tools like it can do for you. Like you mentioned, Katie, you can use it to summarize books, you can use it to help you with the research. One of the most ingenious uses that I've seen is people ask it to generate practice tests. They upload their whole textbook and they say, "I have a test on Friday on chapters four and six, can you generate five practice tests for me to take?" And then that helps them understand what sort of questions they would be getting and what kinds of things keep popping up in all of those practice tests; those things are probably the most important things to learn. So let me quickly share a real-world example of AI cheating to see what you think about it. The most infamous case perhaps comes from a recent New York Magazine story about students using ChatGPT for their coursework. The story starts off with Chungin Roy Lee, a former Columbia student who created a generative AI app explicitly to cheat on his computer science schoolwork. He even ended up using it in job interviews with major tech companies. He scored an internship with Amazon after using his AI helper during the interview for the job. He declined to take that job, by the way. So that's pretty ingenious. He's coding an app. He's using generative AI to make an app to help him cheat on things and get jobs. Do you think that the "ingenuity" behind building something like this is cheating? Do we think that his creation of this AI tool carries any merit? Lauren Goode: I mean, it's so clearly cheating because the intent is to cheat. If we go back to that question of, are you using it to replace the goal of what you're trying to do? His goal is cheating. His goal is like, "Look how clever I am and then I'm cheating." Lee strikes me as the irritant in the room. What it's doing is bubbling to the surface a lightning-rod topic that is much bigger than this one specific app. Katie Drummond: Well, and he, in April of this year, something I thought was interesting just in terms of he's the irritant, but how many complicit irritants does he have on his team? Lee and a business partner raised $5.3 million to launch an app that scans your computer screen, listens to the audio, and then gives AI-generated feedback and answers to questions in real time. And my question when I read that was, "Who are these investors? Who are these people?" The website for this company says, "We want to cheat on everything." And someone was like, "Yes, I am writing a check." Of course it's cheating. They say that it's cheating. I mean, I appreciate the creativity. It's always interesting to see what people dream up with regards to AI and what they can create. But using AI to ace a job interview in real time, not to practice for the job interview beforehand, but to, in real time, answer the interviewer's questions, like you're setting yourself up and your career up for failure. If you get the job, you do need to have some degree of competence to actually perform the job effectively. And then I think something else that I'm sure we'll talk about throughout this show is it's the erosion of skill. It's knowing how to think on your feet or answer tough questions or engage with a stranger, make small talk. There are all of these life skills that I worry we're losing when we start to use tools like the tools that Lee has developed.
And so of course I think there are interesting potential use cases for AI; like, interview prep or practice is an interesting way to use that technology. So again, it's not about the fact that AI exists and that it's being used in the context of education or a job interview, but it's about how we're using it. And certainly in this case it's about the intent. It's someone who is developing these tools specifically with the intention of using them and marketing them for cheating. And I don't like that. I don't like a cheater, other than when I cheated in high school. Michael Calore: Well, we've been talking a lot about ChatGPT so far and for good reason because it's the most popular of the generative AI tools that students are using, but there are other AI tools that they can use to help with their coursework or even just do their schoolwork for them. What are some of the other ones that are out there? Lauren Goode: I think you can literally take any of these AI products that we write about every day in WIRED, whether it's ChatGPT, whether it's Anthropic's Claude, whether it's Google Gemini or the Perplexity AI search engine, Gamma for putting together fancy decks. There are also sort of highly specialized AI tools like Wolfram or MathGPT, which are both math-focused models. And you can see folks talking about that on Reddit. Katie Drummond: Something interesting to me, too, is that there are now also tools that basically make AI detectors pretty useless. So there are tools that can make AI-generated writing sound more human and more natural. So you basically would have ChatGPT write your paper, then run it through an additional platform to finesse the writing, which helps get that writing around any sort of AI detection software that your professor might be using. Some students have one LLM write a paper or an answer, and then they sort of run it through a few more to basically make sure that nothing can show up or nothing can be detected using AI detection software. Or students, I think too, are getting smarter about the prompts they use. So there was a great anecdote in this New York Magazine story about asking the LLM to make you sound like a college student who's kind of dumb, which is amazing. It's like maybe you don't need the A plus, maybe you're okay getting the C plus or the B minus. And so you set the expectations low, which reduces your risk, in theory, of getting caught cheating. Michael Calore: And you can train a chatbot to sound like you. Katie Drummond: Yes. Yeah. Michael Calore: To sound actually like you. One of the big innovations that's come up over the last year is a memory feature, especially if you have a paid subscription to a chatbot, you can upload all kinds of information to it in order to teach it about you. So you can give it papers, you can give it speeches, YouTube videos of you speaking so it understands the words that you'd like to use. It understands your voice as a human being. And then you can say, "Write this paper in my voice." And it will do that. It obviously won't be perfect, but it'll get a lot closer to sounding human. So I think we should also talk about some of the tools that are not necessarily straight chatbot tools that are AI tools. One of them is called Studdy, which is study with two Ds, which I'm sure the irony is not lost on any of us that they misspelled study in the name, but it's basically an AI tutor. You download the app and you take a picture of your homework and it acts like a tutor.
It walks through the problem and helps you solve it, and it doesn't necessarily give you the answer, but it gives you all of the tools that you need in order to come up with the answer on your own. And it can give you very, very obvious hints as to what the answer could be. There's another tool out there called Chegg, C-H-E-G-G. Katie Drummond: These names are horrific, by the way. Just a memo to Chegg and Studdy, you have some work to do. You both have some work to do. Lauren Goode: Chegg has been around for a while, right? Katie Drummond: It's a bad name. Lauren Goode: Yeah. Michael Calore: It has been, it's been very popular for a while. One of the reasons it's popular is the writing assistant. Basically you upload your paper and it checks it for grammar and syntax and it just helps you sound smarter. It also checks it for plagiarism, which is kind of amazing because if you're plagiarizing, it'll just help you not get caught plagiarizing. And it can help you cite research. If you need to have a certain number of citations in a paper, oftentimes professors will say, "I want to see five sources cited." You just plug in URLs and it just generates citations for you. So it really makes that easy. Katie Drummond: I mean, I will say there are some parts of what you just described that I love. I love the idea of every student, no matter what school they go to, where in the country they live, what their socioeconomic circumstances are, having access to one-on-one tutoring to help support them as they're doing their homework, wherever they're doing it, whatever kind of parental support they do or don't have. I think that that's incredible. I think the idea of making citations less of a pain in the ass is like, yeah, that sounds good. Not such a huge fan of helping you plagiarize, right? But again, it's like this dynamic with AI in education where it's not all good, not all bad. I've talked to educators, and the impression I have gotten, and again, this is just anecdotal, is that there is so much fear and resistance and reluctance and this feeling among faculty of being so overwhelmed by, "We have this massive problem, what are we going to do about it?" And I just think that too often people get caught up in the massive-problem part of it and aren't thinking enough about the opportunities. Michael Calore: Of course, it's not just students who are using AI tools in the classroom; teachers are doing it too. In an article for The Free Press, an economics professor at George Mason University says that he uses the latest version of ChatGPT to give feedback on his PhD students' papers. So kudos to him. Also, The New York Times recently reported that in a national survey of more than 1,800 higher-education instructors last year, 18% of them described themselves as frequent users of generative AI tools. This year, that percentage has nearly doubled. How do we feel about professors using generative AI chatbots to grade their PhD students' papers? Lauren Goode: So I have what may be a controversial opinion on this one, which is just give teachers all the tools. Broadly speaking, I don't think it is wrong for teachers to use the tools at their disposal, provided it aligns with what their school system's or university's policies say, if it is going to make their lives easier and help them to teach better.
So there was another story in The New York Times, written by Kashmir Hill, that was about a woman at Northeastern University who caught her professor using ChatGPT to prepare lecture notes, because of a string from a prompt that he accidentally left in the output for the lecture notes. And she basically wanted her $8,000 back for that semester, because she was thinking, "I'm paying so much money to go here and my teacher is using ChatGPT." It currently costs $65,000 per year to go to Northeastern University in Boston. That's higher than the average for ranked private colleges in the US, but it's all still very expensive. So for that price, you're just hoping that your professors will saw off the top of your head and dump all the knowledge in that you need, and then you'll enter the workforce and nab that six-figure job right out of the gate. But that's not how that works, and that is not your professor's fault. At the same time, we ask so much of teachers. At the university level, most are underpaid. It is increasingly difficult to get a tenure-track position. Below the university level, teachers are far outnumbered by students. They're dealing with burnout from the pandemic. They were dealing with burnout before then, and funding for public schools has been on the decline at the state level for years because fewer people are choosing to send their kids to public schools. Katie Drummond: I mean, I totally agree with you, in terms of: one group of people in this scenario are subject-matter experts, and one group of people in this scenario are not. They are learning a subject. They are learning how to behave and how to succeed in the world. So I think it's a mistake to conflate or compare students using AI with teachers using AI. I think that what a lot of students, particularly at a university level, are looking for from a professor is that human-to-human interaction, human feedback, human contact. They want to have a back-and-forth dialogue with their educator when they're at that academic level. And so if I wrote a paper and my professor used AI to read the paper and then grade the paper, I would obviously be very upset to know that. That feels like cheating at your job as a professor. And I think it's cheating the student out of that human-to-human interaction that, ostensibly, they are paying for. They're paying for access to these professors; they're not paying for access to an LLM. Lauren Goode: Lesson plan, yeah. Katie Drummond: But for me, when I think about AI as an efficiency tool for educators: should a professor use AI to translate a written syllabus into a deck that they can present to the classroom for students who are maybe better visual learners than they are written learners? Obviously. That's an amazing thing to be able to do. You could create podcast versions of your curriculum so that students who have that kind of aptitude can learn through their ears. You know what I mean? There are so many different things that professors can do to create more dynamic learning experiences for students, and also to save themselves a lot of time. And none of that offends me. All of that, actually, I think is a very positive and productive development for educators. Michael Calore: Yeah, I mean essentially what you're talking about is people using AI tools to do their jobs in a way that's more efficient. Katie Drummond: Right, which is sort of the whole promise of AI. In theory, in a best-case scenario, that's what we're hoping for. Lauren Goode: What it's supposed to be. Yeah. Katie Drummond: Yeah.
Michael Calore: Honestly, some of these use cases that we're talking about that we agree are acceptable, are much the same way that generative AI tools are being used in the corporate world. People are using AI tools to generate decks. They're using them to generate podcasts so that they can understand things that they need to do for their job. They're using them to write emails, take meeting notes, all kinds of things that are very similar to the way that professors are using it. I would like to ask one more question before we take a break, and I want to know if we can identify some of the factors or conditions that we think have contributed to this increasing reliance on AI tools by students and professors. They feel slightly different because the use cases are slightly different. Katie Drummond: I think that Lauren had a really good point about teachers being underpaid and overworked. So I think the desire for some support via technology and some efficiency in the context of educators, I think that that makes total sense as a factor. But when I think about this big picture, I don't really think that there is a specific factor or condition here other than just the evolution of technology. The sometimes slow, but often very fast march of technological progress. And students have always used new technology to learn differently, to accelerate their ability to do schoolwork and yes, to cheat. So now AI is out there in the world, it's been commercialized, it's readily available, and they're using it. Of course they are. So I will acknowledge though that AI is an exponential leap, I think, in terms of how disruptive it is for education compared to something like a graphing calculator or Google search. But I don't think there is necessarily some new and novel factor other than the fact that the technology exists and that these are students in this generation who were raised with smartphones and smart watches and readily accessible information in the palms of their hands. And so I think for them, AI just feels like a very natural next step. And I think that's part of the disconnect. Whereas for teachers in their thirties or forties or fifties or sixties, AI feels much less natural, and therefore the idea that their students are using this technology is a much more nefarious and overwhelming phenomenon. Michael Calore: That's a great point, and I think we can talk about that forward march of technology when we come back. But for now, let's take a break. Welcome back to Uncanny Valley . So let's take a step back for a second and talk about that slow march of technology and how various technologies have shaped the classroom in our lifetimes. So the calculator first made its appearance in the 1970s. Of course, critics were up in arms. They feared that students would no longer be able to do basic math without the assistance of a little computer on their desk. The same thing happened with the internet when it really flowered and came into being in the late 90s and early 2000s. So how is this emergence of generative AI any similar or different than the arrival of any of these other technologies? Lauren Goode: I think the calculator is a false equivalence. And let me tell you, there is nothing more fun than being at a tech conference where there's a bunch of Googler PhDs when you ask this question too. And they go, "But the calculator." Everyone's so excited about the calculator, which is great, an amazing piece of technology. 
But I think it's normal that when new technology comes out, our minds tend to reach for these previous examples that we now understand. It's the calculator, but a calculator is different. A standard calculator is deterministic. It gives you a true answer: one plus one equals two. The way that these AI models work is that they are not deterministic. They're probabilistic. The type of AI we're talking about is also generative, or originative. It produces entirely new content. A calculator doesn't do that. So I think if you sort of broadly categorize them all as new tools that are changing the world, yes, absolutely, tech is a tool, but I think that generative AI is in a different category. I was in college in the early 2000s when people were starting to use Google, and you're sort of retrieving entirely new sets of information in a way that's different from using a calculator, but also different from using ChatGPT. And I think if you were to use that as the comparison, the question is: is skipping all of those processes through which you typically learn something, the critical part? Does that make sense? Katie Drummond: That makes sense. And this is so interesting, because when I was thinking about this question and listening to your answer, I was thinking about it more in that way of thinking about the calculator, thinking about the advent of the internet and search, comparing them to AI. Where my brain went was what skills were lost with the advent of these new technologies, and which of those losses was real and serious and maybe which one wasn't. And so when I think about the calculator, to me that felt like a more salient example vis-a-vis AI, because, with the advent of the calculator, are we all dumber at doing math on paper because we can use calculators? Michael Calore: Yes. Katie Drummond: For sure. Lauren Goode: Totally, one hundred percent. Katie Drummond: For sure. You think I can multiply two or three numbers? Oh no, my friend, you are so wrong. I keep tabs on my weekly running mileage, and I will use a calculator to be like, seven plus eight plus 6.25 plus five. That's how I use my calculator. So has that skill atrophied as a result of this technology being available? 100%. When I think about search and the internet, I'm not saying there hasn't been atrophy of human skill there, but that to me felt more like a widening of the aperture in terms of our access to information. But it doesn't feel like this technological phenomenon where you are losing vital brain-based skills, the way the calculator does. And to me, AI feels that way. It's almost like when something is programmed or programmable, that's also where I feel like you start to lose your edge. Now that we program phone numbers into our phones, we don't know any phone numbers by heart. I know my phone number, I know my husband's phone number. I don't know anyone else's phone number. Maybe, Lauren, maybe you're right. It's this false equivalence where you can't draw any meaningful conclusion from any one new piece of technology. And AI, again, I think is just exponentially on this different scale in terms of disruption. But are we all bad at math? Yes, we are. Michael Calore: Yeah. Lauren Goode: Well, I guess I wonder, and I do still maintain that it's kind of a false equivalence to the calculator, but there were some teachers, I'm sure we all had them, who would say, "Fine, use your calculator, bring it to class." Or, "We know you're using it at home for your homework at night, but you have to show your work."
What's the version of "show your work" when ChatGPT is writing an entire essay for you? Michael Calore: There isn't one. Katie Drummond: Yeah, I mean, I think some professors have had students submit chat logs with their LLMs to show how they used the LLM to generate a work product, but that starts from the foundational premise that ChatGPT or AI is integrated into that classroom. I think if you're just using it to generate the paper and lying about it, you're not showing your work. But I think some professors who maybe are more at the leading edge of how we're using this technology have tried to introduce AI in a way that then allows them to keep tabs on how students are actually interacting with it. Lauren Goode: Mike, what do you think? Do you think it's like the calculator or Google or anything else you can think of? Michael Calore: Well, so I started college in 1992, and then while I was at college, the web browser came around, and I graduated from college in 1996. So I saw the internet come into being while I was in the halls of academia. And I actually had professors who were lamenting the fact that when they were assigning us work, we were not going to the library and using the card catalog to look up the answers to the questions that we were being asked in the various texts that were available in the library. Because all of a sudden we basically had the library in a box in our dorm rooms and we could just do it there. I think that's fantastic. Katie Drummond: Yes. Michael Calore: I think having access at your fingertips to literally the knowledge of the world is an amazing thing. Of course, the professor who had that view also thought that the Beatles ruined rock and roll and loved debating us about it after class. But I do think that when we think about using ChatGPT and whether or not it's cheating, like yes, absolutely, it's cheating if you use it in the ways that we've defined, but it's not going anywhere. And when we talk about these things becoming more prevalent in schools, our immediate instinct is like, "Okay, well how do we stop it? How do we contain it? Maybe we should ban it." But it really is not going anywhere. So I feel like there may be a missed opportunity right now to actually have conversations about how we can make academia work better for students and faculty. How are we all sitting with this? Lauren Goode: I mean, banning it isn't going to work, right? Do we agree with that? Is the toothpaste out of the tube? Katie Drummond: Yes, I think- Lauren Goode: And you could be a school district and ban it, and the kids are going to go, "Haha, haha, ha." Michael Calore: Yeah. Katie Drummond: I mean that's a ridiculous idea to even... Lauren Goode: Right. Katie Drummond: If you run a school district out there in the United States, don't even think about it. Lauren Goode: Right. And what's challenging about the AI detection tools that some people use is that they're often wrong. So I think, I don't know, I think we all have to come to some kind of agreement around what cheating is and what the intent of an educational exercise is in order to define what this new era of cheating is. So a version of that conversation has to happen at all these different levels of society, to say, "What is acceptable here? What are we getting from this? What are we learning from this? Is this bettering my experience as a participant in society?" Katie Drummond: And I think ideally from there, it's sort of, "Okay, we have the guardrails. We all agree what cheating is in this context of AI."
And then it's about, how do we use this technology for good? How do we use it for the benefit of teachers and the benefit of students? What is the best way forward there? And there are some really interesting thinkers out there who are already talking about this and already doing this. So Chris Ostro is a professor at the University of Colorado at Boulder, and they recommend actually teaching incoming college students about AI literacy and AI ethics. So the idea being that when students come in for their first year of college, we need to actually teach them about how and where AI should be used and where it shouldn't. When you say it out loud, you're like, "That's a very reasonable and rational idea. Obviously we should be doing that." Because I think for some students too, they're not even aware of the fact that maybe this use of AI is cheating, but that use of AI is something that their professor thinks is above board and really productive. And then there are professors who are doing, I think, really interesting things with AI in the context of education in the classroom. So they'll have AI generate an essay or an argument, and then they will have groups of students evaluate that argument, basically deconstruct it and critique it. So that's interesting to me because I think that's working a lot of those same muscles. It's the critical thinking, the analysis, the communication skills, but it's doing it in a different way than asking students to go home and write a paper or go home and write a response to that argument. The idea being, "No, don't let them do it at home because if they go home, they'll cheat." It's an interesting evolution of, I think, Lauren, the point that you've brought up repeatedly, which I think is totally right: thinking about what the goal is here, and then, given that AI is now standard practice among students, how do we get to the goal in a new way? Michael Calore: Yeah, and we have to figure out what we're going to do as a society with this problem, because the stakes are really, really high. We are facing a possible future where there are going to be millions of people graduating from high school and college who are possibly functionally illiterate because they never learned how to string three words together. Katie Drummond: And I have a second grader, so if we could figure this out in the next 10 years, that would be much appreciated. Lauren Goode: So she's not using generative AI at this point? Katie Drummond: Well, no, she's not. Certainly not. She gets a homework packet and she loves to come home and sit down. I mean, she's a real nerd. I love her, but she loves to come home and sit down and do her homework with her pencil. But my husband is a real AI booster. We were playing Scrabble a couple of months ago, adult Scrabble, with her. She's seven; Scrabble is for ages eight and up, and she was really frustrated because we were kicking her ass, and so he let her use ChatGPT on his computer and she could actually take a photo of the Scrabble board and share her letters. Like, "These are the letters that I have, what words can I make?" And I was like, "That's cheating." And then honestly, as we kept playing, it was cool, because she was discovering all of these words that she had never heard of before, and so she was learning how to pronounce them. She was asking us what they meant. My thinking about it softened as I watched her using it. But no, it's not something that is part of her day-to-day.
She loves doing her homework and I want her to love doing her homework until high school, when she'll start cheating like her mother. Michael Calore: This is actually a really good segue into the last thing that I want to talk about before we take another break, which is the things that we can do in order to make these tools more useful in the classroom. So, thought exercise: if you ran a major university, or if you're in the Department of Education before you lose your job, what would you be doing over your summer break coming up in order to get the institutions under your charge ready for the fall semester? Katie Drummond: I love this question. I have a roadmap. I'm ready. I love this idea of AI ethics, so I would be scouring my network, I would be hiring a professor to teach that entry-level AI ethics class, and then I would be going to each of my department heads, because every realm of education within a given college is very different. If you have someone who runs the math department, they need to think about AI very differently than whoever runs the English department. So I would be asking each of my department leads to write AI guidelines for their faculty and their teachers. You can tell I'm very excited about my roadmap. Michael Calore: Oh yes. Katie Drummond: I would then review all of those guidelines by department, sign off on them, and also make sure that they laddered up to a big-picture, institutional point of view on AI. Because obviously it's important that everyone is marching to the beat of the same drum, that you don't have sort of wildly divergent points of view within one given institution. Lauren Goode: What do you think your high-level policy on AI would be right now, if you had to say? Katie Drummond: I think it would really be that so much of this is about communication between teachers and students, that teachers need to be very clear with students about what is and is not acceptable, what is cheating, what is not cheating, and then they need to design a curriculum that incorporates more, I would say, AI-friendly assignments and work products into their education plan. Because again, what I keep coming back to is, you can't send a student home with an essay assignment anymore. Lauren Goode: No, you can't. Katie Drummond: You can't do that. So it comes down to, what are you to do instead? Lauren Goode: I like it. Katie Drummond: Thank you. What would you do? Lauren Goode: I would enroll at Drummond. Drummond, that actually sounds like a college. Where did you go to school? Drummond. Michael Calore: It does. Lauren Goode: Well, I was going to say something else, but Katie, now that you said you might be hiring an ethics professor, I think I'm going to apply for that job, and I have this idea for what I would do as an ethics professor teaching AI to students right now. On the first day of class, I would bring in a couple of groups of students. Group A would have to write an essay right there on the spot, and Group B would presumably be doing the same, but actually they weren't. They were just stealing Group A's work and repurposing it as their own. And I haven't quite figured out all the mechanics of this yet, but basically I would use it as an example of here's what it feels like when you use ChatGPT to generate an essay: you're stealing some unknown person's work, essentially cut up into bits and pieces and repurposed as your own. Katie Drummond: Very intense, Lauren. Lauren Goode: I would start off the classroom fighting with each other, basically. Katie Drummond: Seriously?
Michael Calore: It's a good illustration. I would say that if I was running a university, I would create a disciplinary balance in the curriculum across all of the departments. You want to make sure that people have a good multidisciplinary view of whatever it is that they're studying. So what I mean is that some percentage of your grade is based on an oral exam or a discussion group or a blue-book essay, and some other percentage is based on research papers and tests and other kinds of traditional coursework. So, I think there has to be some part of your final grade that is based on things that you cannot use AI for. Learning how to communicate, how to work in teams, sitting in a circle and talking through diverse viewpoints in order to understand an issue or solve a problem from multiple different angles. This is how part of my college education worked, and in those courses where we did that, where one third of our grade was based on a discussion group, one class during the week was devoted to sitting around and talking. I learned so much in those classes, and not only about other people, but also about the material. The discussions that we had about the material went places that my brain would not normally have gone. So yeah, that's what I would do. I think that's the thing that we would be losing if we all just continued to type into chatbots all the time. There are brilliant minds out there that need to be unleashed, and the only way to unleash them is to not have them staring at a screen. Lauren Goode: Mike's solution is touch some grass. I'm here for it. Michael Calore: Sit in a circle, everybody. Okay, let's take one more break and then we'll come right back. Welcome back to Uncanny Valley. Thank you both for a great conversation about AI and school and cheating, and thank you for sharing your stories. Before we go, we have to do real quick recommendations. Lightning round. Lauren, what is your recommendation? Lauren Goode: Ooh. I recommended flowers last time, so... Katie Drummond: We are going from strength to strength here at Uncanny Valley. Lauren Goode: My recommendation for flowers has not changed, for what it's worth. Hood River, Oregon. That's my recommendation. Michael Calore: That's your recommendation. Did you go there recently? Lauren Goode: Yeah, I did. I went to Hood River recently and I had a blast. It's right on the Columbia River. It's a beautiful area. If you are a Twilight fan, it turns out that much of the first Twilight movie was filmed right where we were. We happened to watch Twilight during that time just for kicks. I forgot how bad that movie was, but every time the river valley showed up on screen, we shouted, "Gorge." Because we were in the gorge. I loved Hood River. It was lovely. Michael Calore: That's pretty good. Katie? Katie Drummond: My recommendation is very specific and very strange. It is a 2003 film called What a Girl Wants, starring Amanda Bynes and Colin Firth. Michael Calore: Wow. Katie Drummond: I watched this movie in high school, where I was cheating on my math exams. Sorry. For some reason, just the memory of me cheating on my high school math exams makes me laugh. And then I rewatched it with my daughter this weekend, and it's so bad and so ludicrous and just so fabulous. Colin Firth is a babe. Amanda Bynes is amazing, and I wish her the best. And it's a very fun, stupid movie if you want to just disconnect your brain and learn about the story of a seventeen-year-old girl who goes to the United Kingdom to meet the father she never knew.
Michael Calore: Wow. Lauren Goode: Wow. Katie Drummond: Thank you. It's really good. Lauren Goode: I can't decide if you're saying it's good or it's terrible. Katie Drummond: It's both. You know what I mean? Lauren Goode: It's some combination of both. Katie Drummond: It's so bad. She falls in love with a bad boy with a motorcycle but a heart of gold, who also happens to sing in the band that plays in UK Parliament, so he just happens to be around all the time. He has spiky hair. Remember 2003? All the guys had gelled, spiky hair. Lauren Goode: Yes, I still remember that. Early 2000s movies, boy, did they not age well. Katie Drummond: This one, though, aged like a fine wine. Michael Calore: That's great. Katie Drummond: It's excellent. Lauren Goode: It's great. Katie Drummond: Mike, what do you recommend? Lauren Goode: Yeah. Michael Calore: Can I go the exact opposite? Katie Drummond: Please, someone. Yeah. Michael Calore: I'm going to go literary. Katie Drummond: Okay. Michael Calore: And I'm going to recommend a novel that I read recently that just shook me to my core. It's by Elena Ferrante, and it is called The Days of Abandonment. It's a novel written in Italian by the great pseudonymous novelist and translated into English and many other languages. And it is about a woman who wakes up one day and finds out that her husband is leaving her, and she doesn't know why and she doesn't know where he's going or who he's going with, but he just disappears from her life and she goes through it. She accidentally locks herself in her apartment. She has two children that she is now all of a sudden trying to take care of, but somehow neglecting because she's- Katie Drummond: This is terrible. Michael Calore: But the way that it's written is really good. It is a really heavy book. It's rough, it's really rough subject-matter-wise, but the writing is just incredible, and it's not a long book, so you don't have to sit and suffer with her for a great deal of time. I won't spoil anything, but I will say that there is some resolution in it. It's not a straight trip down to hell. It is, really, just a lovely observation of how human beings process grief and how human beings deal with crises, and I really loved it. Katie Drummond: Wow. Michael Calore: I kind of want to read it again, even though it was difficult to get through the first time. Katie Drummond: Just a reminder to everyone, Mike was the one who didn't cheat in high school or college, which totally tracks from the beginning of the episode to the end. Michael Calore: Thank you for the reminder. Katie Drummond: Yeah. Michael Calore: All right, well, thank you for those recommendations. Those were great, and thank you all for listening to Uncanny Valley. If you liked what you heard today, make sure to follow our show and to rate it on your podcast app of choice. If you'd like to get in touch with us with any questions, comments, or show suggestions, write to us at uncannyvalley@ We're going to be taking a break next week, but we will be back the week after that. Today's show is produced by Adriana Tapia and Kiana Mogadam. Greg Obis mixed this episode. Jake Loomis was our New York studio engineer. Daniel Roman fact-checked this episode. Jordan Bell is our executive producer. Katie Drummond is WIRED's global editorial director, and Chris Bannon is the head of Global Audio.

WIRED, 17-05-2025

Is Elon Musk Really Stepping Back From DOGE?

Elon Musk is apparently turning his attention away from Washington and back to Tesla. On this episode of Uncanny Valley , the hosts unpack what Musk's pivot means for the future of DOGE. Elon Musk arrives for a town hall meeting wearing a cheesehead hat at the KI Convention Center on March 30 in Green Bay, Wisconsin. Photo-Illustration: WIRED Staff; Photograph:All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. Elon Musk says he's stepping back from his role with the so-called Department of Government Efficiency to turn his attention to his businesses—most urgently to Tesla, which has faced global sales slumps in recent months. In this episode, we discuss how our understanding of DOGE has evolved over the past five months and what we think will happen when Musk scales back. You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Katie Drummond on Bluesky at @katie-drummond. Write to us at uncannyvalley@ How to Listen You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too. Transcript Note: This is an automated transcript, which may contain errors. Michael Calore: Hey, this is Mike. Before we start, I want to take the chance to remind you that we want to hear from you. Do you have a tech related question that's been on your mind, or maybe you have a topic that you wish we talk about on the show? If so, you can write to us at uncannyvalley@ and if you listen to and enjoy our episodes, please rate it and leave your review on your podcast app of choice. It really helps other people find us. Hi folks, co-hosts. How's it going? Katie Drummond: Ugh. Michael Calore: That good? Katie Drummond: That was me, Katie. That was me speaking. No, it's going all right. It's been a stressful 90 minutes leading up to recording this podcast, but I'm okay. Michael Calore: Did you just fly through Newark? Katie Drummond: No, actually I didn't. Although I know that that is in your cards, in the near future. I actually rescheduled a flight to avoid Newark, so I'm now taking a red eye for no reason other than I don't want to fly into Newark Airport. Lauren Goode: Smart. Katie Drummond: Thank you. Michael Calore: I'm jealous. Lauren Goode: Mike, I'm sending you all of the good wishes. Michael Calore: Thank you. I hope to listen to this podcast on an airplane that took off on time and lands on time without incident on Thursday. Lauren Goode: I hope you return next week able to tape another podcast because you didn't get stuck somewhere. Michael Calore: I think metaphysically, we're all stuck somewhere right now, I think. Lauren Goode: Yeah, we're in the middle of some big transitions. That's probably the one thing that we have in common with Elon Musk. Katie Drummond: Touché. Michael Calore: Back in the first week of January, we put out an episode of this show that was all about DOGE, the so-called Department of Government Efficiency. I would say it was our very first DOGE episode, if I'm remembering correctly. And we talked about the key players, the goals of the group, and the ins and outs of government spending. A lot has happened since then. 
And now, Elon Musk, says that he's stepping back from his full-time role at DOGE. There are still many unanswered questions about where DOGE stands now, including if and when Elon's exit will happen, but we're wondering what actually has been accomplished during Musk's time with the DOGE Bros. So, today in the show, the latest on DOGE and what it may look like post-Elon. This is WIRED's Uncanny Valley , a show about the people, power, and influence of Silicon Valley. I'm Michael Calore, Director of Consumer Tech and Culture here at WIRED. Lauren Goode: I'm Lauren Goode, I'm a Senior Writer at WIRED. Katie Drummond: And I'm Katie Drummond, WIRED's Global Editorial Director. Michael Calore: So, I want to start by asking a question that we asked in our last deep dive on DOGE, because I think the answer may have changed since then. At this moment, just a few months into Trump's second term as President, May 2025, what exactly is DOGE? Lauren Goode: Well, I wish it was a figment of our imagination. Katie Drummond: Yes, I wish that it was a fever dream, but that is still the big question, incredibly enough. And I think at WIRED, we've actually been very careful when we characterize DOGE in our reporting, we often, or always, use the term, "so-called." The so-called Department of Government Efficiency, because it doesn't really actually exist. And as some WIRED reporters pointed out last month, I think it was Zoë and Kate, it's almost a metaphysical question at this point. And that was in relation to employees at the General Services Administration, despite the fact that there are at least half a dozen DOGE operatives on payroll at that administration, despite the fact that there is a section of that building that is for DOGE use only and is a secure facility within the GSA, that the acting head of the GSA actually said, in an all-hands, that there was no DOGE team working at the GSA. Which begs the question, well, who are these people then and who do they work for? I think in a more practical way, there are two DOGEs. There's US Digital Service, which was essentially hijacked and repurposed by the administration, now known as the US DOGE Service. Sure. And then there's a temporary organization within the US DOGE service, called, obviously, the US DOGE Service Temporary Organization. And that organization is ostensibly in charge of carrying out the DOGE agenda. So, I think all of this semantic BS aside, what is DOGE? Well, it is the brainchild of Elon Musk. It is something that the president got on board with very early, and DOGE is effectively a collection of typically young, I think almost always male, technologists who come from companies that Musk and Peter Thiel do run or have run. Despite what the acting head of GSA says, there is a DOGE, and it is made up of these dozens and dozens of technologists who are working inside all of these different agencies. That is what DOGE is, whether it's a real department or agency or not, that's what it is. And we have a pretty good sense now, in May, of what they're actually doing. Michael Calore: And it's important to note that they did make a number of hires, dozens and dozens of people who they hired to be a part of DOGE, who are now installed in various agencies around the federal government. Lauren Goode: And a lot more layoffs too. Michael Calore: Yeah. Well, we have been doing a lot of reporting on DOGE. As Katie, as you just mentioned, WIRED has been on top of the story ever since the beginning, because we know Elon and we know his playbook. 
So, what are some of the stories that WIRED has done over the last few months on DOGE that have just totally blown your mind? Katie Drummond: Wow. There are a lot. I think the reporting that we have done around what DOGE is doing using AI and using all of the data that they've been able to access to actually surveil immigrants, I think that that reporting is incredibly disturbing. I think it is beyond the worst fears of folks in late January, early February as DOGE's work was getting underway, the idea that this kind of thing could happen and that it could happen so quickly, it certainly was talked about. It was speculated in terms of what do you think they're going to do? What are they after? There were a lot of hypotheses at the time. I don't think anyone anticipated that we would see that kind of work happen so quickly and in such a dystopian way. And then, I think, it hasn't blown my mind, but I really like the coverage that we've done around how recruiting for DOGE happens. And we just published another story on this recently, I think it was a couple of weeks ago. It was in early May, from Caroline Haskins and Tori Elliot, that was about another round of recruiting that's happening for DOGE. And this recruiting always seems to happen in these Slack groups for alumni of various tech companies, this time it was Palantir, and this guy, this entrepreneur, went into the Slack room and basically said, "Hey, I'm looking for people who would be excited to design and deploy AI agents who could free up at least 70,000 full-time government workers over the next year." And in the way he phrased it, he was saying, "These agents could free up these 70,000 people for," quote, "higher impact work." Which begs the question, higher impact work in the private sector after you fire all of them? Exactly what is the plan? And that story was really interesting to me because of how, first of all, I think how the recruiting happens is really interesting. I think the fact that it's happening, they're specifically targeting alums from certain companies, that this is happening in Slack groups and message boards. I think that's interesting. But I thought that the way that message was received was fascinating, given that we're now in May. And so, people have seen DOGE play out over the last few months. We wrote, "Eight people reacted with clown face emojis, three reacted with a custom emoji of a man licking a boot. Two reacted with a custom emoji of Joaquin Phoenix giving a thumbs down in the movie Gladiator. And three reacted with a custom emoji with the word 'fascist.'" So, it was just interesting to me to note that alums of a company like Palantir are looking at that message, and at least some of them are saying, like, "Nah, I see what you're doing here. And this is not only not compelling to me as a recruitment effort, but actually fascist." Lauren Goode: Now, I should mention that I happen to have been on a short book leave at the start of this year— Katie Drummond: Good timing. Lauren Goode: When ... Great timing. Katie knows I came back, and I was lamenting to her via our Slack, like, "Katie, I'm literally never taking leave again because so much happened." And starting in late January, I started to see WIRED's incredible reporting, watching it from afar and seeing all this news come out about DOGE, and just was like, "What is happening?" 
And one of the things that stood out to me almost immediately was this juxtaposition of cuts to the federal workforce and also cuts to federal spending, like the $1 limit that was placed on federal employees' credit cards— Michael Calore: Oh, gosh. Lauren Goode: And how much this limited their ability to do their job, like running out of toilet paper, running out of printer paper, not being able to just do office functions as a federal employee, juxtaposed with Trump's incredibly lavish candlelight dinners and the crypto scheme we talked about last week, and all of the ways in which it seems like there are members of this administration who are simply lining their pockets as they have dispatched DOGE to make all of these cuts. If you just step back from that, it's hard to see, at this point, how this benefits America. What has actually happened here? Michael Calore: I think probably my favorite story is one of our most recent ones about the Library of Congress, and how two gentlemen showed up to the Library of Congress and said, "Hi, we work here. You need to let us in." Capitol Police said, "No. Who are you? Can you identify yourselves?" And they showed him a note from DOGE saying that they worked there and that they should let them in. And the Capitol Police turned them away. And it turns out they did actually work there. They had a note from Daddy. Lauren Goode: Please never call him that again. Katie Drummond: Oh, boy. Michael Calore: So, back when we first started talking about DOGE, at the beginning of the year, it was actually two people. It was Elon Musk and Vivek Ramaswamy. I think a week after we published that episode, Vivek was out. Lauren Goode: Has anyone heard from Vivek? Katie Drummond: I don't think about him. I don't know him. I don't know that man. No. Isn't he running for governor? Lauren Goode: I was going to say he's running for governor of Ohio. Wasn't that the plan? I like how we're all Googling this. Katie Drummond: He's pivoted. Michael Calore: Well, it's important to think about who's running it now, because Elon says he's only going to be around one to two days a week. He says he will continue to do work for DOGE and for President Trump until the end of Trump's term, whatever year that may be. He's going to be scaling back. He's going to go on 20% time, basically. So, who are the people who are still there? Who are the people? Who are the names that we now need to know? Lauren Goode: I think AI agents are going to be running all of it. Katie Drummond: Well, obviously they're apparently replacing 70,000 federal workers with them within the year. Obviously, there are some very high-profile members of DOGE after just a few short months. There's Edward "Big Balls" Coristine, this 19-year-old appointed by Musk who owns LLC. I'm sure everyone is familiar with Big Balls at this point. There are plenty of other young, inexperienced engineers working across these agencies, and then there are the adults in the room. There are people like Steve Davis, who is one of Musk's, really, right-hand men, who works closely alongside him at a number of his companies, and has been working with him in the federal government. And we also, of course, know that they are still actively recruiting, again, largely from companies that Musk himself owns. So, I think that the whole point of all of this is that, yes, Elon Musk is scaling back. So, let's say he scales back, let's say he decides to part ways with DOGE and the administration altogether.
DOGE is already embedded in the federal government. He accomplished what he set out to do, in so far as we now have DOGE team members, DOGE operatives at dozens and dozens and dozens of federal agencies. They very clearly have their marching orders, they're carrying out work. So, at this point, you can't claw that all back, and that doesn't leave the federal government just because Elon Musk potentially leaves the government. The damage is done. I do think it's important to note here, and I know this will come up over and over because I'm going to keep bringing it up. Elon Musk at two days a week, is a lot of Elon Musk. 20% of Elon Musk's time going to the federal government, sure, he won't be in the weeds seven days a week, 24 hours a day, but that's a lot of Musk time. So, I do think it's important to be cautious, and I just say this to all of our listeners and to everyone out there, this idea that Musk is disappearing from the federal government or disappearing from DOGE, the administration might want you to think that that's what's happening. I suspect that that is not at all what's happening. That said, from all appearances, Elon Musk might be less involved in DOGE, but DOGE is going to keep on keeping on. Michael Calore: And while it's trucking, what is Elon going to be doing? What does he say? Lauren Goode: Yeah, what is he going to be doing? Katie, do you have a sense of how much of this is related to the fact that Tesla isn't doing so well right now? Katie Drummond: Well, I suspect that that's a big factor, but I think so much of the narrative externally, and even people at Condé Nast who have come up to me to be like, "Elon, he's out. Is it Tesla? Why is he leaving DOGE?" This is optics. This is narrative. His company is in the tubes, it is really struggling. They needed a way to change that story, and they needed a way to change that story very quickly. The best way that they could change that story was to say, "No, no, no, no, no. Don't worry. Elon Musk is not all in on DOGE and the federal government. He is going to be stepping back and he's going to be focusing on his other companies." Even just Trump saying that, Musk saying that, that being the narrative that plays out in the media is incredibly helpful for Musk, particularly in the context of Tesla, and just the board, and shareholders, and their confidence in his ability to bring this company back from the brink. So, do I think that he's pulling back and will be spending less time with DOGE? Yes. Do I think a lot of this was just smoke, and mirrors, and optics, and narrative and PR? Yes, it was incredibly well-timed right as Tesla was really, really, really in the tubes and getting a ton of bad press. Elon Musk makes this very convenient announcement, right? Lauren Goode: Mm-hmm. Right. And this is something that the venture capitalist and Musk's fellow South African, David Sacks, has said, "It's just what Musk does." He said he has these intense bursts where he focuses on something, gets the right people and the structure in place, feels like he understands something, and then he can delegate. And he's just reached that point with DOGE. He's in delegation mode. Katie Drummond: Yes, it seems like he has all the right people in place, and a structure that is so clear and transparent to the American people, that it's time for him to move on. Michael Calore: And I do think that he is going to have to figure out the Tesla situation. As you said, the company's really struggling, and there are a lot of reasons for that. 
There are no new Tesla models for people to buy, even though they were promised. There have been a bunch of recalls. People are just hesitant about buying a new EV right now anyway, for a number of reasons. But it's really, it's him that people don't like. So, much like the damage that he has done to the structure of the federal government with DOGE, he has done damage to Tesla, the brand, by his association with the policies of the Trump Administration, and his cozying up to the President, and his firing of people and destroying of their rights. Katie Drummond: And isn't it also true that all of these problems with Tesla, all of the problems, aside from Elon Musk himself, those problems were happening or were poised to happen regardless, like issues with new models, with recalls, that all predates his work with DOGE, unless I'm drastically misunderstanding how time works. So, those problems with the company existed and were bound to become a bigger deal at some point, and then it really feels like his work with DOGE and the federal government just added fuel to the fire. He just poured gasoline on all of his company's problems by participating with the Trump Administration in the way that he did. But the fact that Tesla is a troubled company is old news, and has nothing to do with the fact that Elon Musk is not a well-liked individual. So, it's just problem on top of problem. Michael Calore: That's right. That's right. And the damage is done, I think, at this point. He would probably have to move on from that company in order to fully turn it around. Katie Drummond: Well, we still have a lot of time left in the year, so we'll see. Michael Calore: All right, well let's take a break and we'll come right back. Welcome back to Uncanny Valley. When we talked about DOGE at the beginning of the year, it still felt just like an idea. The tone was decidedly different. We talked about how the group was named after a meme coin, and we all had a good laugh at the absurdity of it all. It was still unclear what would happen. And of course, since then, DOGE has gutted multiple federal agencies, dismantled so many programs, fired a bunch of people, built a giant database to track and surveil people, among other things. Katie Drummond: So, I wasn't actually with you guys on the show when you talked about DOGE in January, but I was listening to the show, and I remember you talking about Musk's plans to, quote, "open up the books and crunch the numbers to cut costs." Sounds very exciting. And cutting some of those costs, of course, had to do with laying people off. Now, I remember that because Zoë Schiffer, who hosts the other episode of Uncanny Valley, said she would be surprised if any, quote, "books were even opened." So, what did we see actually happen from that prediction to now, from January to May? Lauren Goode: I want to give Zoë a shout-out here, because I think the context of that was me saying, "Oh, I wonder how they're going to go about this careful, methodical process of doing the thing." And she was like, "This is going to be utter chaos. They're not going to open any books." Katie Drummond: She was right. It has been chaos. Lauren Goode: So we also said that the New Yorker reported Vivek had joked at one point that he was going to do a numbers game. You would lose your job if you had the wrong Social Security number. That didn't actually happen, but Zoë surmised at the time that this was potentially going to be run off of the Twitter/X playbook, run like a chaotic startup.
And that's true. I definitely did think there would be more of a process to what DOGE was doing, so I was wrong. There was process. They have systematically terminated leases for federal office buildings, or taken over other buildings. They're reportedly building out this big master database. They've gutted public agencies like the CDC, and regulatory bodies like the CFPB, the Consumer Financial Protection Bureau. So they've done a lot. I think the part where I thought there would be more process was around the people, the human capital of all this, like the federal workforce. And so, maybe in a lot of ways, this is just like some startup, you're acting recklessly and worrying about the human beings you're affecting later. Michael Calore: And I think the thing that we also predicted correctly was that if DOGE has a chance to shape the regulatory agencies in the federal government, they would shape those agencies in a way that benefit people who are in their industry. Lauren Goode: Right. Katie Drummond: I think one of the questions you guys were asking back in January was whether or not the administration was bringing in these guys. It was Musk and Ramaswamy at the time, because they actually wanted them to advise on how technology is used as part of government services, as part of the way the government works, or because they thought the two would be influential over the types of regulations that are rolled back or introduced. So, man, it's crazy to even say all of that, knowing what we know now about ... It's just interesting, in January, we knew so little, we were so naive. But what do you think now about why Musk, in particular, was actually brought on board? Lauren Goode: Well, honestly, I think that they have done both. WIRED has reported that DOGE is building out a master database of sensitive information about private citizens, and a database that will reportedly help them track immigrants. And we know they're playing around with these AI agents, like you just talked about, Katie. And so, we know that they were brought in to apply that technology building mindset to government services, if you want to call it that. But I think that they also are influencing policy, because on the policy side, we've seen, I mentioned David Sacks, he's Trump's crypto and AI Czar, and he's been weighing in on cryptocurrency and stablecoin regulations. Even if that hasn't been pushed through yet, he's certainly in Trump's ear about it. Musk has also been pushing back on Trump's tariff policies. Musk has been expressing his opinion on immigration policies. Those are just a few examples, but safe to say, he has Trump's ear. Michael Calore: I think at the beginning I was cautiously interested in the IT consultant part of it, like the DOGE mission to come in and modernize the federal government. Obviously, if you've ever dealt with federal government agencies, as a person who's computer-literate, sometimes you are just completely flabbergasted by the tools that you have to use to get access to services in this country. So yes, guys, come in, do your thing, zhuzh it up, make it work better. Of course, that is absolutely not what happened. But I was excited about the prospect of that maybe happening. And it turns out that they really took the opportunity to take all of the data that are in all of these agencies and put it all together into one giant input, fed into various systems that are going to process that data and find efficiencies in ways that are probably going to affect human beings negatively. 
A computer is really good at doing very simple tasks over and over again. It doesn't necessarily understand the nuances of how things are divided up equitably among different sectors of society, it doesn't understand the nuances of people's personal situations. So, that's the modernization that we're going to see, I think, of government systems. And that's frightening, that wasn't what I was expecting. Katie Drummond: Now, we've talked a little bit on and off in this episode already about AI. AI has played a much bigger role with DOGE than maybe we thought it would, maybe we hoped it would, in January. So, let's talk about that. As far as we know now, what does DOGE aspire to do with AI, and how were you thinking about that in January, if you were thinking about it at all? Lauren Goode: I still feel like I don't really understand what they're trying to do with AI, frankly. Katie Drummond: Maybe they don't. Lauren Goode: We know at this point that there are AI officers and leaders in the federal government. We mentioned David Sacks before, who was put in charge of crypto and AI. There is now the first ever AI officer at the FDA, Jeremy Walsh. WIRED has reported that OpenAI and the FDA are collaborating for an AI assisted scientific review of products. Our colleague, Brian Barrett, has written about the use of AI agents. In particular, Brian wrote, "It's like asking a toddler to operate heavy machinery." Social Security Administration has been asked to incorporate an AI chatbot into their jobs. And we've also reported on how the GSA, the General Services Administration has launched something called the GSAI bot. But we also later found out that that's something that was based on an existing code base, a project that existed prior to DOGE taking over the building. I think the short answer is that when DOGE first started, we didn't really have a clear sense of how they were going to use AI. And even right now, after saying all that on this podcast, I cannot pretend to understand fully what they are doing with AI. And that's either due to a lack of transparency, or just the fact that it all seems very disparate, very scattered. I'm not going to sit here on this podcast and pretend to make sense of it. Michael Calore: With a lot of this stuff, it's hard to understand where the DOGE initiatives end, and where just other initiatives in the federal government begin. I think simply because there's a lack of transparency about how these decisions are being made, who's advising who, and who's really drafting the memos. When we think about what is AI going to do, we have to consider what an AI agent is. It is a program that can do the same work as a human being. And that's just the broad definition of it. So, you can deploy an AI agent to write emails, make phone calls, fill out paperwork, whatever it is. You're just basically doing admin work, and there is a lot of admins in the federal government, and I think that that is in our future. People have this cozy idea that their experience with AI is maybe ChatGPT or Siri, or something like that. So, "Oh, you have a problem with your taxes, you can just talk to the IRS chatbot and it'll solve it for you." That sounds like a nightmare. I can't imagine that any IRS chatbot is going to be able to solve any problems for me. It'll probably just make me mad and make the problems worse or the same. 
But when you think about, "Okay, here is an opportunity for us to use these AI agents in a way that will increase efficiency across the government," what you're really talking about is just we don't need these people anymore and we just need to replace them with the technology. Katie Drummond: One of the pieces of this that I think is so consequential, I remember maybe a year and a half ago, talking to a bunch of civil servants, people in decision-making roles across federal agencies, and they were all asking a lot of questions about AI. They were very curious about AI. The Biden Administration executive order had put forth all of these different demands of different agencies to investigate the potential for AI to do X, Y, or Z within their agencies. So they were in that exploratory process. They were very slow to think about how AI could be useful within those agencies, and that's for the bureaucracy reasons, but it's also because the work of these federal agencies, you don't really want to get it wrong. When we're talking about the IRS or we're talking about payments from treasury, we're talking about evaluating new drugs via the FDA, you want to be right. You want to reduce the risk of error as much as possible. And I think for so many people in technology, there's this notion that technology outdoes human performance just inevitably. It's inevitable that a system will do a better job than a human being who is fallible, who makes mistakes. That said, what we know about AI so far, generative AI in particular, is that it makes a lot of mistakes. This is very imperfect technology. AI agents are not even really ready for primetime within a private company for one individual to use in their own home, let alone inside the federal bureaucracy. So, I do think that a lot of what DOGE has done with AI, like Lauren, to your point about them building on top of this existing AI initiative at the GSA, is they're taking very preliminary work in AI at these agencies, and they're just fast tracking it. They're saying, "This is going to take three years. No, no, we're doing this in three weeks." And that's scary, given what we know about AI and how effective and how reliable it is right now. So, does anything stand out to you guys about that in the context of what we're talking about around AI and DOGE, and AI in the federal government? What are some of the risks that really stand out to you guys? Lauren Goode: I think that it is consequential when you think about AI being used in such a way that it ends up impacting people's jobs, right? Katie Drummond: Right. Lauren Goode: But I actually think that that idea of AI agents doing the jobs of humans at this point is a little bit optimistic. And when I think about what feels more consequential, is this idea of AI just becoming a code word or a buzzword for what is essentially very, very, very advanced search. 
So, if they are able to build this master database that creates some sort of profile of every US citizen, or every US non-citizen, and is pulling in from all these different data sources, both within government agencies and from public documents, and across the web and across social media, and anything you've ever tweeted, and anything you've ever said, and anything you've ever done, and if you've ever gotten a parking ticket or a DUI, or you've committed a crime, or anything like that, to just hoover that all into one centralized location and be able to pull that up on a citizen at the drop of a hat, that, to me, feels more consequential and potentially more dangerous than going to the Social Security website and having an annoying bot trying to answer your questions for you. Michael Calore: It's surveillance creep, really is what it is. And marry that with computer vision, like face recognition and the ability to photograph everybody who's in a car at the border, cross-reference that with government documentation like passports and driver's licenses, and you have a whole new level of surveillance that we have not dealt with before in our society. Katie Drummond: Now, not to be all negative Nelly, because we often are, but does any ... What? Michael Calore: What show are you on? Katie Drummond: You know me, the Canadian. Does anything stand out to both of you as having actually been good from all of this? So, DOGE takeover January to May, anything potentially exciting? Any bright spots, anything where we should be a little bit more generous in our assessment and say, "You know what, actually, as dystopian and scary as a lot of this is, this is potentially a good thing, or this is unequivocally a good thing"? Anything like that that stands out to either of you? Lauren Goode: I would say that if there's one area where we could be a little bit more generous, it might be that if this turnaround of the federal government was something that was being done in good faith, then I might give them a pass after just five months. I might say ... Katie, you've done turnarounds before? Katie Drummond: I have. Lauren Goode: They take longer than five months, right? Katie Drummond: They do. Lauren Goode: Yes. Okay. Katie Drummond: Depends on the size of the organization. With the federal government, you're looking at five to 10 years. Lauren Goode: Right. Exactly. So there's that. In terms of the actual cuts to fraud and abuse, as far as we know and as has been reported by other outlets, the cuts that DOGE has made fall far below what Trump and Musk had promised. Initially, they said that they were going to slash $2 trillion from the federal budget. That goal was cut in half almost immediately. The latest claims are that $160 billion has been saved through firing federal workers, canceling contracts, selling off the buildings, other things. And NPR just reported that the tracker on DOGE's own website is rife with errors and inaccuracies, though. The wall of receipts that DOGE has been posting totals just $63 billion in reductions, and actually, as of late March, government spending was up 10% from a year earlier. Revenue was still low. So, we're still in a deficit, in terms of federal spending. There is one thing I've heard from folks in Silicon Valley that they think is a good thing. It's Musk's pushback on some of Trump's immigration policies, specifically those that affect high-tech workers.
During Trump 1.0, the denial rates for H-1B visas spiked, and Trump said he wanted to end, forever, the use of H-1B visas; he called it a cheap labor program. Now, he has flip-flopped a bit. Stephen Miller, his Homeland Security Advisor and Deputy Chief of Staff, has been pushing for more restrictions on this worker visa. But Musk, who actually understands how critical this visa is for the talent pipeline in Silicon Valley, maybe because he's an immigrant, I think has managed to sway Trump a bit on that. And so, for obvious reasons, perhaps, people in Silicon Valley say, "Well, I think this is actually a good thing that Musk is doing." Michael Calore: I'll point out two things. Lauren Goode: Go ahead. Michael Calore: One, the LOLs. The press conference that they did in the Oval Office where Elon brought his child— Katie Drummond: Oh, that was good. Michael Calore: That was definitely a big highlight for me. But seriously, the other thing is that people are really engaged now. You talk to people who are somewhat politically minded, and they have opinions about government spending, they have opinions about oversight and transparency, they have opinions about what actually matters to them. Like what do they need from their government, what do they want their government to do for them. Those were all nebulous concepts even five, six months ago that I think are at the top of everybody's mind now. And I think that is a good thing. Katie Drummond: Oh, I love that. A galvanized and engaged public— Michael Calore: That's right. Katie Drummond: As a plus side to DOGE. I love it. We're going to take a quick break and we'll be right back. Michael Calore: Welcome back to Uncanny Valley . Before we wrap up, let's give the people something to think about, our recommendations. Katie, why don't you go first? Katie Drummond: I have an extremely specific recommendation. Do either of you use TikTok? Lauren Goode: I do sometimes. Michael Calore: Define use. Katie Drummond: Scroll. Lauren Goode: Yeah, scroll maybe like once every couple weeks. Katie Drummond: Do you thumb through TikTok? Michael Calore: I'm familiar with it, yes. Katie Drummond: There is an account on TikTok called Amalfi Private Jets. It is the account of a private jet company. This is the most genius marketing I have ever seen in my life. For someone who likes reality TV and trash, which is me. It's these little 60-second reality TV episodes, where the CEO of Amalfi Private Jets is on the phone or he's on a Zoom with one of his clients, often, I think her name is McKenna. She's a young, extremely wealthy, entitled little brat, and she'll call him up in the clip, he's at his office. He's young and handsome, and he's like, "Hey, McKenna." And she's like, "Hey, Colin. So, my dad said that I had to fly from Geneva to London," and blah, blah, blah. And then there's this whole dramatic narrative around McKenna and why she needs a $75,000 jet immediately, and she needs it to have vegan spinach wraps refrigerated. It's just these very dramatic little vignettes of what life is like for the rich and fabulous who are calling Amalfi Private Jets to book their private jets. So there's that account. And then, once you go down the rabbit hole of that account, the TikTok algorithm will start serving up these companion accounts they've created, like the CEO of the company has one, his girlfriend has one. I think McKenna now has one.
And so, there's this little cinematic universe of Amalfi Private Jets on TikTok, and you get sucked in, and you get to know all of these people. And it's a little vertical video reality show experience that I highly recommend if you only have 60 seconds, which then turn into two hours, which then turn into pulling an all-nighter to learn everything about Amalfi Private Jets, their CEO, his girlfriend, and their wealthy clientele. This is the TikTok for you. Enjoy. Michael Calore: This is genius. Katie Drummond: Thank you. Lauren Goode: This is the reality TV of the future. Katie Drummond: It's incredible. Lauren Goode: It has arrived. Katie Drummond: And you know what? And I just did their job for them, because it's marketing for their company. They got me. Michael Calore: All right, Lauren, what's your recommendation? Lauren Goode: My recommendation might go nicely on your Amalfi Private Jet. Hear me out, peonies. You guys like flowers? Michael Calore: Oh, peonies. Lauren Goode: Peonies. Katie Drummond: I like flowers. Michael Calore: Sure. Lauren Goode: Do you like peonies? Katie Drummond: I couldn't tell one from another, but I like them. Lauren Goode: They're beautiful. It's peony season here. I'm saying that now with the O enunciated, which is how I would say it if I were giving my Architectural Digest home tour. Michael Calore: I see. Lauren Goode: Yes, these are peonies. Katie Drummond: Oh, I'm just looking at Google images of them. They're very nice. Lauren Goode: Aren't they beautiful? Katie Drummond: They're very nice. Lauren Goode: The cool thing is they do have a very short-lived season. In this part of the world, it's typically late May through June. If you plant them, they only bloom for a short period of time. If you buy them, they're these closed balls, not to be confused with Edward Coristine "Big Balls." They're these closed balls, and then after a few days they open up and they're the most magnificent-looking things. They're really, really pretty. And I got some last week at the flower shop, and when they opened, I was like, "Oh my God." It just made me so happy. And they're bright pink. And so, if you're just looking to do something nice for yourself, or you just want to pick up a nice little thoughtful gift for someone, get them some peonies. You know what? I didn't check to see if they're toxic to pets. So, check that first, folks. But, yes. Michael Calore: That's great. Katie Drummond: Mike, what's yours? Michael Calore: So, I'm going to recommend an app. If you follow me on Instagram, Snackfight on Instagram, you may notice that I have not posted in a long time, and that's because I stopped posting on Instagram, and I basically just use it as a direct message platform now. But there are still parts of my brain that enjoy sharing photos with my friends, so I found another app to go share photos on and it's called Retro. Lauren Goode: Yeah, Retro. Michael Calore: So, it's been around for a while, but I went casting about for other things out there, and I found that there was a group of my friends who are on Retro, and I was like, "Oh, this is great." It's very private. By default, somebody can only see back a couple of weeks. But if you would like to, you can give the other user a key, which unlocks your full profile so that they can look at all of your photos going back to the beginning of time, according to whenever you started posting on Retro. I really like that about it, the fact that when I post a photo, I know exactly who's going to see it.
There are no Reels, there's no ads, there's no messaging features, there's no weird soft-core porno on there, there's no memes. It's just pictures. And I really like that. It's like riding a bicycle through the countryside after driving a car through a city. It's a really different way to experience photo sharing, because it's exactly like the original way of experiencing photo sharing, and I'd forgotten what that feels like. Katie Drummond: Oh, it sounds lovely. Lauren Goode: What's cool about the app too is when you open it and you haven't filled out that week's photos, when you tap on it, it automatically identifies those photos from that week in your camera roll. It's like, "You shot these photos between Sunday and Saturday, and here's where you can fill this weekend." Michael Calore: And— Lauren Goode: It's pretty cool. Michael Calore: And all the photos from the week stack up. So, if you post 12 photos, and then you look at my profile, you can just tap through all 12 photos, and then that's it. That's all you get. Lauren Goode: Good job, Nathan and team. Michael Calore: Who's Nathan? Who are you shouting out? Lauren Goode: Nathan Sharp is one of the cofounders of it. He's a former Instagram guy. I think his cofounder is as well. It was founded by two ex-Instagram employees. And the whole idea is they're trying to make something that's not the anti-Instagram, but is more private. Michael Calore: Feels like the anti-Instagram right now. Lauren Goode: It's nice. It's a nice place to hang out. Michael Calore: Well, thanks to both of you for those great recommendations. Lauren Goode: Thanks, Mike, for yours. Katie Drummond: Yeah, Mike, thanks. Lauren Goode: Thanks, Mike. Katie Drummond: Bye. Lauren Goode: See you on the jet. Michael Calore: And thanks to you for listening to Uncanny Valley . If you liked what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you'd like to get in touch with us with any questions, comments, or show suggestions, please write to us at uncannyvalley@wired.com. We'd love to hear from you. Today's show is produced by Kyana Moghadam. Amar Lal at Macro Sound mixed this episode. Jake Loomis was our New York Studio engineer. Daniel Roman fact-checked this episode. Jordan Bell is our Executive Producer. Katie Drummond is WIRED's Global Editorial Director, and Chris Bannon is the Head of Global Audio.

The Hottest Topic at This Year's Pornhub Awards? Censorship

WIRED

15-05-2025

  • Entertainment
  • WIRED

By Lauren Goode and Manisha Krishnan May 15, 2025 1:22 PM Lawmakers' ongoing attempts to ban pornography in the US didn't stop Pornhub from hosting their annual awards show last week. On Uncanny Valley , we discuss the fate of the porn industry. Photo-Illustration: All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. Pornhub is currently not available in more than a third of US states, due to new age verification laws. And just last week, two Republican senators introduced a bill that could ban pornography across the country. The looming threat to the industry was not lost on some of the biggest names in the adult film industry at this year's Pornhub Awards. In fact, it was central to the event's theme. WIRED's Manisha Krishnan was there, and on this week's episode tells us all about the event, and how Pornhub's story is at the center of tech and politics today. You can follow Lauren Goode on Bluesky at @laurengoode and Manisha Krishnan on Bluesky at @manishakrishnan. Write to us at uncannyvalley@wired.com. Mentioned in this episode: Your Favorite Porn Stars Are Sick of Being Censored. But They're Not Going Away by Manisha Krishnan The Biggest Dating App Faux Pas for Gen Z? Being Cringe by Elana Klein North Korea Stole Your Job by Bobbie Johnson How to Listen You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too. Transcript Note: This is an automated transcript, which may contain errors. Lauren Goode: Hey, this is Lauren. Before we start, I wanted to just take a second to remind you that we want to hear from you. If you have a tech-related question that's been on your mind or just a topic that you wish we'd talk about more on the show, you can write to us. We do read our emails, and that email address is uncannyvalley@wired.com. And if you listen to and enjoy our show, we'd love it if you rated it and left us a review on whatever podcast app you're using. We really do appreciate the feedback and it helps other people find the show. Welcome to WIRED's Uncanny Valley . I'm WIRED's senior correspondent Lauren Goode filling in today for Zoe Schiffer. Today on the show, we're going to an unexpected place that can actually tell us a lot about the relationship between tech and politics today. And that place is the seventh annual Pornhub Awards Gala. We're going to talk about how adult performers are dealing with the threat of censorship and how Pornhub in particular has become a case study of how far anti-porn legislation could go. The most recent move happened just last week when a bill aiming to ban all porn in the United States was reintroduced by two Republican senators. I'm joined this week by WIRED's senior culture editor, Manisha Krishnan. Manisha, welcome to the show. Manisha Krishnan: Thanks for having me on, Lauren. Lauren Goode: So you spent the day and the evening with some of the top Pornhub performers at this awards gala. Can you set the scene of the event for listeners and tell us a little bit about the folks you spoke to? Manisha Krishnan: So, I mean, first of all, it was honestly very interesting to just be behind the scenes with them as they were doing all of their glam.
We were at the Sunset Marquis Hotel after they got ready in their cowboy outfits, which were just so over the top. I mean, one of them was literally wearing a belt as her bottom. Lauren Goode: As one does. Manisha Krishnan: Yeah, they were beautiful designs. They were designed by this designer, Chris Habana, who has worked with Beyonce before. But anyway, so they all got dressed and then they let loose on the hotel and just were taking very sort of family photo style, but maybe the creepiest family photos you've ever seen, just sort of all over the hotel grounds. And then they piled into this party bus and we rolled up to the Saddle Ranch Chop House, which is where the event was at, and they took a bunch more photos. There was a red carpet featuring all of the porn stars on wanted posters. And then we went inside for the awards. And the awards themselves were very informal. It was 90% just partying vibes and then very, very quickly doing award stuff. Lauren Goode: All right. So you mentioned that the theme of the event was country. Was that weird at all given the kind of pushback that porn sites and performers are getting from a lot of Southern states in the U.S.? Was this theme intentional? Manisha Krishnan: Yeah, so most of these age verification laws have passed in the South, which means that Pornhub is not available in most of the South. So it did feel a little bit tongue in cheek that they were playing up this country theme, these sort of wanted posters. I asked them if they were trying to troll conservatives by doing that, and they would neither confirm nor deny that. But certainly there was a huge sort of political backdrop to all of this, to this big celebration of porn when you have kind of a war on porn that's happening right now. Lauren Goode: Tell us about this war on porn. What exactly is happening? What measures have been taken? Manisha Krishnan: In 2022, Louisiana passed the first sort of age verification law, which means that it's now the porn website's responsibility to ID people and make sure that they're at least 18 if they're going on their website. And so since then there's been around 20 states that have followed suit. And as a result, Pornhub has pulled out, excuse that pun, but Pornhub has pulled out of most of those states, so it's currently not available in around 17 states. It did actually try to sort of play along with the Louisiana age verification law, and I was told that they lost 80% of traffic and that people just didn't want to sort of give their IDs and their ID information when they were logging onto a porn site. They actually also raised this point that they actually have very buttoned up verification for their performers and the people uploading to their site. And what they're saying is that when they remove their content from a state, they think that people might actually go and use even sketchier websites essentially that aren't as rigid and don't have as many standards in place. Lauren Goode: Because these bans are specific to Pornhub, not porn in general. Manisha Krishnan: They're about any porn website. So basically what these bans say is if you are a website that has porn, you need to take on the responsibility of IDing the users of the site. But that is usually done through a third-party service, the actual verification part of it. And so Pornhub is saying, "We're not going to do that. We think it's risky." And so instead, they are removing their content from the states that have passed these laws. 
Lauren Goode: And now there's a bill that's attempting to ban porn on a federal level for the third time. What are the chances of it actually passing and becoming law? Manisha Krishnan: So it's kind of tough to say what its chances are. Basically what this bill is arguing is that they want to change the definition of obscene. They're saying that the definition of obscene is too vague and that it doesn't apply to the world of modern internet porn, and that internet porn is so depraved and so easy to get access to, especially for young people. And so they're trying to sort of change that definition and give it a hard definition under which any sort of actual or simulated sexual acts would effectively be criminalized. Now, as far as the chances of that passing, I think we'll have a better idea in a few months because there's actually a Supreme Court case right now that is looking at Texas's age verification laws. So what's happened, just backing up a little bit, with all these states that have passed all of these age verification laws, there have been lawsuits coming from both sides. So there have been porn entertainment companies that have been suing the states arguing that these laws are a violation of free speech. And also some of these states have been suing these companies saying you have failed to comply with our age verification laws. And so right now the Supreme Court is reviewing one of these cases, and I think when they issue their decision, which is expected later this year, we'll have a better sense of how likely it would be for a federal criminalization bill against porn to pass. Lauren Goode: What would you say is the driving force behind all of these laws or attempts at laws? Manisha Krishnan: I mean, I think it's part of this sort of wholesale return to super Christian values, traditional values that we're seeing. And we saw a lot of that driven by Project 2025. So, one of the co-authors of Project 2025 is Russell Vought, and there was a British nonprofit journalism center that sort of published a video of him last year where he said that these age verification laws were a backdoor way of having a national porn ban. And so he said that what they were hoping for is that porn companies would just respond by pulling out of these states, which is exactly what Pornhub is doing. And then in that same sort of clip, he talks about wanting people to have lots of babies. And Lauren, I know that you guys have talked on this show about the whole pronatalism movement. So I think all of these things are sort of hand in hand. Lauren Goode: So what actually is the connection there in the mind of people like Russell Vought or people who are supporting these more traditional values? Manisha Krishnan: I mean, I think ultimately porn performers are sex workers. And sex workers are generally just a super marginalized community. They have always kind of been under some sort of attack, whether ideological or actually criminal. And so I think that that segment of society doesn't really necessarily fit into this larger vision of traditional Christian values. Even though, I mean, Asa Akira, she's one of the porn stars I interviewed for the Pornhub Awards, she is a very famous porn star and she's also a mom of two. So in their world, it's not necessarily mutually exclusive, being a good mom and being a porn star. Lauren Goode: Right. And in your great story on WIRED, you were writing about how her kids don't know; they're young and they don't know exactly what their mom does. She says she's a video producer.
And at some point she and her partner will have the conversation with them. But yeah, to her it's a job. This is what she does for a living. Manisha Krishnan: I mean, I think it's a bit more than a job. She definitely has a passion for it. But once she decided she wanted to get pregnant, that was about seven years ago, she stopped shooting partner scenes. So now she only sort of self-publishes videos of herself on sites like OnlyFans and Pornhub, which is also kind of interesting the way that these sites have revolutionized the porn industry and sort of made it easier to control exactly what type of porn you're performing in. Lauren Goode: Manisha, let's take a quick break, and when we come back, we're going to talk about the different kinds of pushback that Pornhub has been getting about its content. Welcome back to Uncanny Valley . So it feels important to differentiate between the different kinds of pushback that Pornhub has been receiving. The things we've been talking about so far mostly trace to conservative lawmakers disagreeing with sexual content being available on the internet. There are plenty of people who find porn offensive or it doesn't align with their values. But back in 2019 and 2020, reports started coming in of Pornhub just being rife with sexual trafficking and child abuse videos. And a few years later, Pornhub admitted that it actually had been receiving proceeds from sex trafficking content. It ended up agreeing to a three-year monitor, or basically an independent organization, keeping tabs on whether Pornhub is actually keeping this content off of its site. How should we be thinking about these different approaches to basically moderating Pornhub, right? There's the legislation that has protective purposes, and then there's the legislation that has an ideological bent. Manisha Krishnan: Yeah. So I mean, definitely Pornhub is kind of on this image rehabilitation journey. They obviously have been sort of accused of a lot of horrible things. They've admitted to profiting from sex trafficking content, meaning videos showing abuse that were on the site and were monetized. But since then they have sort of said, "Okay, we're going to be about transparency. We're going to try to help raise the bar in the industry by, for example, coming up with a set of good practices for porn operators to adopt in order to prevent the publication of child sexual abuse material." They also have really buttoned up seemingly... I mean, they say... I should say they say that they have buttoned up a lot on sort of verifying the people, the identities of the people who are uploading to their site, as well as getting consent and releases from all the performers who appear in the videos uploaded to their site. So I think that Pornhub would say and does say that they're not against age verification. In fact, one of the VPs told me that she would support some sort of age verification that happens through your phone, so potentially through an iOS update or something like that, rather than having that responsibility offloaded to a third-party site. Lauren Goode: And what's interesting is this is not the first time that big tech platforms have suggested that age verification should happen at the level of the operating system, that basically Apple's iOS or Google's Android OS, the folks who run the app stores should be responsible for the age verification. This has been happening with dating sites, dating apps.
But Google and Apple have, maybe for understandable reasons, pushed back on that responsibility. Manisha Krishnan: Yeah, I mean, it's interesting because it shows that nobody really wants to take on this liability of identifying people. I'm sure it opens you up to lawsuits. What Alex Kekesi, who is Pornhub's VP of branding and community, told me was that she's really hoping that the different stakeholders, the governments, tech companies can come together and create some sort of solution. But it does seem like there's a lot of passing the buck around in terms of who is actually going to verify people's ages. And the other thing she said to me was that she thinks that people would be more comfortable with it happening on their phone than giving their identification details away to a third party sort of verification source or a website. Lauren Goode: Interesting. What do you think the most likely outcome is here of all of these measures, whether it's around age verification, whether it's about porn sites being allowed at the state level, or whether there could potentially be some kind of federal regulation that effectively bans porn? Manisha Krishnan: I mean, a few years ago I would've said that a federal porn ban seems crazy. I just wouldn't have thought that it could come to that. But we are living in a different era right now that seems extremely top down, top heavy. So I wouldn't be surprised. I mean, so many of the goals from Project 2025, as our colleagues have reported, are already happening. So at this point, it really wouldn't surprise me. But the one thing I do know is that people will find a way to get their hands on porn. Every time one of these bans happens, there's a huge spike in searches for how to get a VPN. So there's always sort of loopholes and sometimes by taking something away, you're just sort of leading people in a direction to something that might be worse than whatever the original thing was. Lauren Goode: I mean, and it also forces a definition of porn that I think is hard for a lot of people to agree upon. Some people would probably say that they're served porn-like content on Instagram. Manisha Krishnan: Some people might say my article was porn. I mean, just because of the photos. It is, it's a really slippery slope when you try to put a super hard and fast rule on things like this. Lauren Goode: So I have one last question for you. Who actually won the Pornhub Awards this year? Manisha Krishnan: I feel like I won the Pornhub Awards this year because in talking to so many stars about censorship, I found out that one of them calls his thingy a meat missile in order to avoid social media censorship. So I feel like that was a good little one for my vocab. Lauren Goode: A meat missile. Who is the actor who calls it that? Manisha Krishnan: He is called the Girth Master. He's 6'6" and he won Bestie. He actually tied for Bestie with a Spanish performer. And because they're both not Americans, when he accepted, he was like, "I guess this is another reason immigrants are good." And it was so funny. Lauren Goode: That feels like the perfect place to take another break. When we come back, we're going to do some recommendations and tell our listeners what else they should check out on WIRED this week. Welcome back to Uncanny Valley . I'm WIRED senior correspondent Lauren Goode. And I'm joined this week by WIRED senior culture editor, Manisha Krishnan.
Before we take off, Manisha, tell our listeners what they should absolutely read on WIRED this week in addition to your fantastic story about the Pornhub Awards. Manisha Krishnan: Yeah, so I would encourage everyone to read Elana Klein's piece on Gen Z's crippling fear of being cringe on the dating apps. I think it's just such a great slice into their mindset. And basically it's like everything from just sharing that you want a long-term relationship to actually earnestly saying what your hobbies are is considered extremely cringe. Essentially, any type of vulnerability is a massive faux pas, and yet a lot of them are super lonely and struggling with connection. I feel like these two concepts are very related. They're very judgmental in the piece, but also hilarious. And so I just love those types of stories and I think everyone should read it. Lauren Goode: What's an example of being super cringe on a dating app? I'm just asking for a friend. Manisha Krishnan: Just, okay, the first one I thought of, and I kind of agree with this one, was the guys who post the giant fish. Lauren Goode: Oh, they have the fish. Manisha Krishnan: Yeah. That one was like a classic one. And what about you, Lauren? What are you recommending this week? Lauren Goode: I really enjoyed Bobbie Johnson's feature story, we're calling it The Big Story, on how a bunch of pretenders, as they're called in the story, from North Korea are interviewing for US-based tech jobs, IT jobs, and basically perpetuating this whole scam and working with third parties in the US who are helping facilitate them, where they interview with these very sort of Anglo-sounding names and they seem to have great resumes and good coding abilities, and then when a recruiter goes to interview them, they present as Asian and they have accents and the recruiter's kind of confused, but they end up getting a job and siphoning away money from the US and giving it to North Korea's government. It's a fantastic story by a freelancer named Bobbie Johnson and I recommend everyone check that out. In case you weren't paranoid enough already about what's going on online, this will make you more paranoid. That is our show for today. We're going to link to all of the stories we talked about in the show notes, so check those out. And be sure to check out Thursday's episode of Uncanny Valley , which is all about what DOGE has accomplished, what it hasn't, and what it's all potentially going to look like after Elon Musk's exit. If you liked what you heard today, be sure to follow our show and rate it on your podcast app of choice. And if you'd like to get in touch with any of us for questions, comments, show suggestions, write to us at uncannyvalley@wired.com. Jordan Bell, Kyana Moghadam, and Adriana Tapia produced this episode. Amar Lal at Macro Sound mixed this episode. Jordan Bell is our executive producer. Condé Nast's head of global audio is Chris Bannon, and Katie Drummond is WIRED's global editorial director.

The Dangerous Decline in Vaccination Rates

WIRED

01-05-2025

  • Health
  • WIRED

Measles vaccinations offered by Harris Public Health are photographed on Saturday, April 5, 2025, in Houston. Photo-Illustration: WIRED Staff; Photograph: All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. In the year 2000, measles was declared eliminated from the United States. But thanks to declining vaccination rates, Americans may have to contend with a much scarier future for the deadly disease. Today on the show, we talk about the state of measles, and we explain the role Robert F. Kennedy Jr., Secretary of Health and Human Services, has played in the shifting culture around vaccines in America. You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Katie Drummond on Bluesky at @katie-drummond. Write to us at uncannyvalley@wired.com. How to Listen You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too. Transcript Note: This is an automated transcript, which may contain errors. Michael Calore: How's everybody feeling? Katie Drummond: I'm on the road this week, which listeners might notice, but I feel okay, I feel good. Lauren, how do you feel? Lauren Goode: I'm doing good, actually. I'm feeling better than I've been feeling in months. Katie Drummond: Wow! Lauren Goode: Somehow, some way. I'm not on the road. I think maybe that's why; I've had a long bout without travel. Katie, I thought you were going to say that after you came back from France and you developed this new French butter habit that you were feeling better than ever before. Katie Drummond: I do. The butter is so life-affirming. I've been eating a lot of French butter. And I feel great! I feel incredible. Lauren Goode: Amazing. Did you run five miles this morning? That's what I want to know. Katie Drummond: I ran seven miles this morning. Sorry. Lauren Goode: Stop it! Michael Calore: Seven? Katie Drummond: Yeah. Michael Calore: Wow. Lauren Goode: Flex. Michael Calore: I have run zero miles in the last month. No, no, that's not true. Lauren Goode: No, we went running. Michael Calore: In the last two weeks. We did. Lauren Goode: Yeah. Michael Calore: It's an important part of my health routine and when I don't do it, I definitely feel it. Katie Drummond: Oh, yeah. Lauren Goode: Yeah. Michael Calore: Today we are going to be talking about our health, and not just our own health, but the health of all Americans because it has been on everybody's mind lately. Here at WIRED, we've been reporting on the current administration's dismantling of the public health agencies and defunding of research programs. We've tracked all the ways that Elon Musk and his DOGE cohort have been hoovering up all of our sensitive health data and the sensitive health data of millions of Americans without offering a clear explanation of why they're doing it. And we've been watching the shift in culture around vaccines in America, and changing attitudes about what the government's role should be in our collective well-being. On this episode, we're going to talk about all of that. We'll talk about the measles outbreak.
We'll talk about all of the other health crises that we really thought we would never have to talk about anymore. But it won't be all heavy stuff; I'm sure we will find a way at some point to have some fun in this show. Katie Drummond: We just had fun. 30 seconds ago we were having fun. Lauren Goode: Can we just go back to talking about butter and running? Katie Drummond: We'll have fun again. We will have fun. Michael Calore: We promise. This is WIRED's Uncanny Valley , a show about the people, power, and influence of Silicon Valley. I'm Michael Calore, director of consumer tech and culture here at WIRED. Lauren Goode: I'm Lauren Goode, I'm a senior writer at WIRED. Katie Drummond: And I'm Katie Drummond, WIRED's global editorial director. Michael Calore: Let's start by talking about the measles outbreak. Oddly, this is not the first time that we've brought up measles on Uncanny Valley . Because last month, Katie, you talked with Emily Mullin from our science desk on one of our Tuesday episodes about the rise in measles cases that have happened under the watch of Robert F. Kennedy, Jr., who is the country's head of health and human services appointed by the Trump Administration. Can you tell us what is going on with measles? Katie Drummond: Unfortunately, I can. I wish that I didn't have to talk about measles, but here we are. Look, measles cases are on the rise in this country. Measles has not been a going concern in the United States for a very, very long time. But that is because of vaccinations. That is because of successful campaigns to get American parents in particular to vaccinate their children. But an outbreak in Texas that started earlier this year is changing that. We have seen more than 600 cases of measles and two deaths, two children who have died from measles this year alone. It's the largest outbreak of measles in Texas since 1992. Nationally in the United States, we've seen 800 cases of measles so far this year, which is the most since 2019. Just by comparison, take that 800 number so far this year, last year, how many measles cases did we see in the United States in the entire year? 285. This is an exponential increase in measles cases in the United States, primarily concentrated in Texas around that outbreak. Michael Calore: We also have some new research about measles that we should talk about, some research out of Stanford. Lauren, your alma mater. Lauren Goode: Oh my gosh. I wish Zoe was here just so that she could groan and say, "Do we need to talk about Stanford again?" Yes, there is new analysis from epidemiologists out of Stanford; our colleague Emily wrote about this last week. The research was published in the Journal of the American Medical Association and they used a computer model basically to look a little bit into the future, and determined that, with current state-level vaccination rates, measles could reestablish itself and become consistently present in the United States in the next two decades. Then they tried a variety of different simulations and their model predicted this exact outcome in 83% of the simulations that they did. What's interesting is that if the current vaccination rates just stayed the same, the model estimated that we could see more than 850,000 cases, 170,000 hospitalizations, and 2,500 deaths over the next 25 years. We need to get our vaccination rates higher, at a better level, in order to thwart that. Basically, they said measles could become endemic if we don't course-correct quickly.
There is a difference between endemic, epidemic, and pandemic, which we've all just been living through. It's still not good. Michael Calore: Right. Lauren Goode: No matter which way you look at it. Michael Calore: Right, because it's a deadly disease. Lauren Goode: Correct. Michael Calore: This goes beyond just measles. What we're talking about is the MMR vaccine, measles, mumps, and rubella. But there are other vaccines that people are supposed to be taking that they're not taking and it's mostly among children. I think all the numbers show that kindergarten vaccination rates are down, which is one of the big factors that public health experts study. Lauren Goode: That's the scary part. By the way, for all of us here, do you remember getting MMR? Michael Calore: Oh, of course. Lauren Goode: Right. I'm looking at Katie, too. I'm like, "Katie?" Katie Drummond: I was just going to, as you were asking that question, in my head I was like, "Who remembers what they did when they were five?" Lauren Goode: Well, that's the thing. Katie Drummond: I don't remember that happening to me, but I know from ... I remember I think when I was pregnant, you have to get some vaccinations and some updates. I think I remember checking in with my dad just to be sure. But it wasn't really a question in our family. I think for still, the majority of families in the United States, it's not really a question of whether or not you're going to vaccinate your children, whether or not you were vaccinated. But there is this growing minority of people who are making a different choice, who are choosing not to vaccinate their kids, who are changing the vaccine schedules, who are spreading out vaccinations because they incorrectly think that that is a safer way to vaccinate. It is that minority, as those percentages shift, that makes a really, really big difference when you're talking about herd immunity and you're talking about protecting an entire community. But no, I don't remember being vaccinated, but I was. Lauren Goode: That's exactly it. I don't remember the shot going into my arm, but I remember it was just standard that you got MMR. Then subsequently, for example when I did go to grad school, which happened to be a little bit later in life, I was in my early 30s, I remember asking my mother because I literally would not have been allowed to go to school if I didn't have evidence of these vaccines. We were looking for a little piece of paper- Katie Drummond: Right. Lauren Goode: ... from the late 1980s that had this. But it was just assumed that we did it. The vaccines we're talking about here include MMR (measles, mumps, and rubella), DTaP, polio, and chicken pox. The drop in vaccination rates is especially dangerous for babies and kids. That decline, which I believe is at the state level from 95 to 93% vaccination rates, may seem small. But when you consider other factors, like how contagious some of these diseases are, how contagious measles is, that's the alarming part.
Katie Drummond: I think it's really important to be really, really clear about RFK Jr., about his legacy, about the damage that he and others have done to this country, and to the integrity of trust in science and in scientific research in the United States. I think one of the really interesting things we're seeing play out now with RFK Jr. is that he is walking back, or modifying, or trying to tread this very careful line where he doesn't come out and enthusiastically deny that vaccines are safe and effective, which they are. But he doesn't want to go so far in the other direction, either. He's essentially trying to launder his history in the eyes of the American public. But the reality is Robert F. Kennedy Jr. has been leading the charge against vaccinations in this country for decades. He was the chair of the Children's Health Defense, which is a nonprofit that campaigns very vigorously against vaccinations. He has many times suggested things like that vaccines cause autism. I remember during the pandemic he said that COVID-19 is targeted to attack Caucasians and Black people. He said, "The people who are most immune are Ashkenazi Jews and Chinese." More recently, we have seen him try to tread the line where he is essentially saying things like, "People should think about vaccines. They should talk to their doctor. This is a personal choice." I don't really think it is actually a personal choice. I think it is a choice that you make with the knowledge that you live among a community of other people. You don't necessarily get vaccines just to protect yourself or just to protect your child. You get vaccines to protect the entire community that you live within. This is a population-wide imperative. That's something that I think even now in his current role, where he does need to tread a more careful line or he is trying to tread a more careful line, he has failed spectacularly to communicate that to the American public. Lauren Goode: Katie, right now, is Kennedy in support of MMR, or is he still toeing the line on vaccines? What's the latest? Katie Drummond: Well, I think the most recent comments he has made about MMR, after months of a lot of pressure and a lot of back-and-forth, he said, "The MMR vaccine is the most effective way to prevent measles." He has said that. That being said, in recent months he has also said things that directly contradict that statement or that call that statement into question. He did an interview with Fox News in March, so just a little over a month ago, where he said, "There are adverse events from the vaccine. It does cause deaths every year. It causes all the illnesses that measles itself causes, encephalitis and blindness, et cetera. People ought to be able to make that choice for themselves." I want to be very clear here. Healthy people, generally speaking, healthy kids, healthy adults who go get the MMR vaccine do not die from that vaccine. That is not a thing. What he is saying is false. He's saying it on Fox News and he's saying it to millions of Americans. Many of whom, if they are regular viewers of Fox News and regular consumers of right-leaning and far-right news organizations, they are already asking questions about vaccines. They are already potentially deciding not to vaccinate their kids. Maybe they are deciding not to get their own vaccines, not to get the flu shot every year. They are already a vulnerable community of people. 
What he is doing in interviews like that is he is further sowing doubt in that community, in those populations of people, around the safety and efficacy of these vaccines. I don't really care if, at some point now, he says the MMR vaccine is the most effective way to prevent measles. Well, cool, dude. You have spent the last several months in your role as a government official, and the last several decades as a high-profile person on this planet, telling everybody that this vaccine and other vaccines are not safe. That they might kill you. That is not true. Michael Calore: We're snapping our fingers here in the studio, but we need to take a break and we're going to come right back. Welcome back to Uncanny Valley . Ever since Donald Trump's inauguration and Robert F. Kennedy's approval as the cabinet secretary for health and human services, we have seen a rapid dismantling of HHS and all of the agencies that work underneath it. They're not going away, but there have been job cuts, there have been consolidations, there have been funding cuts for research, and all kinds of chaos. Where should we start with what's been going on in Washington? Katie Drummond: Oh, boy. These are, as WIRED and so many other outlets have reported, these are huge cuts. These are tens of thousands of employees at these agencies losing their jobs. I think it's important to note, just in the context of this conversation today, that at the same time as RFK Jr., and DOGE, and the administration are making these sweeping cuts to federal health agencies, they are also targeting what appears to be a lot of vaccine-related infrastructure. I think one of the most notable examples to me and something I found particularly disturbing is that the NIH is actually asking researchers to scrub references to mRNA vaccine research in grant proposals. Essentially suggesting, we don't know for sure, but there are strong indications that the federal government under Donald Trump and these health agencies under RFK Jr. will be deprioritizing mRNA vaccine research. Now, I should remind everyone that mRNA vaccines and the incredible research that has allowed them to be possible are the reason that, several years ago, we were able to get shots in arms to make sure that millions more Americans, not to mention people around the entire world, did not die of COVID. mRNA vaccines were the key to thwarting a devastating pandemic. This was just a couple of years ago. It's an incredibly promising field of research, and it's one that now potentially looks like it's at risk because of the approach RFK Jr. is taking to what he describes, the way he talks about it, as vaccine safety. "We need to make sure these things are safe." God forbid, everybody die of measles because they got a vaccine for it. Which, again, doesn't happen. The risk is that, under the auspices of safety, that really promising, experimental work into vaccination technology will not happen. That's what really stands out to me from all of this, among other things. Michael Calore: They have to know that this is going to have a destabilizing effect on the health of Americans. Because the plan that they're instituting right now involves not only rolling back research and funding towards new medicine, but also vital systems that mostly people who have lower means in our society use in order to access healthcare. Healthcare for minority communities, healthcare for people who are suffering from addiction issues, healthcare for people who are single mothers who are on public assistance.
These are the programs that are all being rolled up into a new administration called the Administration for Healthy America. When those programs are rolled up, they're going to be smaller and they're going to have less funding and fewer people working there than they did. It's this odd moment that we have where not only are we having less research and less effort put into finding new cures for things, but we're also providing less public support in general. My big question is what's the plan here? What do we expect is going to happen? Katie Drummond: What we expect is going to happen is that some of these illnesses that were very much under control in this country will, as Lauren said earlier in the show, become endemic again. Or that, when the next COVID, the next devastating pandemic, arrives, we will not have the resources to contain that pandemic and communities won't have access to the information, let alone the vaccinations that they need, to take care of their families. Some of the grants that have already been canceled in this mass culling of federal agencies and this realignment of federal priorities, these are grants that provide measles vaccination centers in Texas. Mike, you were talking a minute ago about what the administration thinks is going to happen and what the plan is here. I think it's one of two scenarios to me, and neither one is particularly reassuring. One, scenario one is that they genuinely think that the United States will be a healthier country if they eliminate experimental research into vaccinations, if they provide less access to this medical care to communities across the country. They might actually think that, based on what they seem to believe, what RFK Jr. seems to believe, that this will be a healthier country if that happens. That's scenario one. Scenario two is that they just don't care. Especially when we're talking about vulnerable communities. One of two scenarios, not sure which one it is, don't like either of them. Michael Calore: Yeah. I feel like both are on the table right now. The goal of DOGE is to eliminate waste, fraud, and abuse; that's something that you see in all of the executive orders and all of the communications coming out of the government right now. "We're stamping out waste, fraud, and abuse." Sure, there was probably some waste, there was probably some fraud. I'm not so sure about abuse. But the wholesale dismantling of these programs that people rely on for their day-to-day lives to work just doesn't feel like the right path forward for America. I do not feel bad about saying that on a podcast. Lauren Goode: No. I think all of this actually threatens to make America as a nation weaker. Not great again. The mRNA research that Katie mentioned earlier, the foundational technology that led to the COVID vaccines, that led to "Operation Warp Speed, look, we've done this so quickly," was actually years in the works. All of our most pivotal research around cancer treatments and other diseases takes years. Then when we have a fractured system, a fractured healthcare system, we also become unable to respond to threats of bioterrorism as quickly as we should be able to. Just picture all of the misinformation that, in some ways, we've been faced with for years, but now it's amplified because of internet culture, too. Just picture all of that flying around in a moment, in a very acute moment of needing a clear leader with evidence-backed knowledge in the room, and we don't have that right now. Michael Calore: Yeah.
I want to dig into something you just said, which is internet culture. Lauren Goode: It's a big part of this. Michael Calore: Yeah. Can you talk us through how big of a part it is, and what the influencers are doing in this moment? Lauren Goode: Well, one of the trademarks of the wellness industry, and particularly on the internet, is that it doesn't really have an established standard of credibility. And that it's constantly suggesting information, and tips, and hacks to people that put something just out of reach for them. Just one more thing that you should be doing to optimize your health, it keeps that machine turning. There was this cultural critic that popped into my feed recently and I can't remember his name, but he made a great point about what happened after GLP-1s became widely accessible to people: you started to see all the wellness influencers start to hype Pilates. Pilates is having a moment because it's the next thing in this flywheel of health hacks that just makes it a little bit more expensive, inaccessible, a thing that all the celebrities are doing that you can't do, but you should be doing. In a nutshell, that is the health and wellness industry online. You combine the psychology of that with the fact that a lot of people do feel utterly disgusted with the US traditional healthcare system, with health facts that seem to give you this sense of control, with a total lack of enforcement around bogus health claims on the internet, and then you add someone like RFK Jr. to the mix, who is supposedly speaking from this position of authority. It's a powder keg. It's this non-toxic, fluoride-free vitamin A powder keg. Katie Drummond: Save us from fluoride, Robert. Lauren Goode: Right. Save our teeth from fluoride. Then occasionally, you have these outlier examples that come up that end up supporting these claims. Someone does happen to have an adverse reaction to a vaccine. A family member who has been shunned by traditional healthcare, but actually did self-diagnose and is now thriving. It becomes an example in people's minds. Very occasionally, RFK Jr. will say something that a majority of people can glom onto, like wanting to ban those ridiculous pharmaceutical ads you see on television. Everyone goes, "Okay, yeah, that makes sense." Michael Calore: Yeah. Lauren Goode: You just combine all of these things and you just have this perfect storm of the potential for misinformation that actually seriously harms people's health. Michael Calore: Yeah. Katie Drummond: Yeah, I think that's all right. I was thinking about this last night, knowing that we were going to be talking about this today on the podcast. I remember in 2011, I went to Minneapolis. I reported a story about Andrew Wakefield, and if that name rings a bell, it should. Andrew Wakefield was spending a lot of time with the Somali community. There's a very large community of Somali immigrants in Minneapolis. He was spending a lot of time with them and essentially telling them not to vaccinate their kids. That the MMR vaccine caused autism. He created this massive public health catastrophe in this one city, in this one community that he decided to target. It all goes back to 1998, when Andrew Wakefield published a paper saying that, "It sure looks like this vaccine causes autism." That was this seminal turning point, to me at least in our lifetimes, around this idea of vaccine hesitancy. Around this idea of just asking questions about vaccines, which ultimately became a very dangerous thing.
I think through the 2000s, and we're now talking about 25 years of history, the emergence of social media, of online connectivity, really built up this mistrust and this anti-vaccine crusader movement. Along with anti-vax activists' very smart use of celebrities. People like Jenny McCarthy, who I remember came out and said, "My son has autism and I think that vaccines are the reason." Michael Calore: Yeah. Katie Drummond: Celebrities like Jenny McCarthy and RFK Jr., who has spent the last 20-plus years of his career parroting a lot of the language, a lot of the ideas of Andrew Wakefield, who has been discredited over, and over, and over, and over, and over again. But you have people like RFK Jr. picking up that mantle and taking it, and then feeding it into this social media machine where, to Lauren's point, now that information, or more accurately that misinformation, can propagate and reach communities not only all around the country, but all around the world. You saw it in the late '90s with Andrew Wakefield, and it really, to me, as I think about it in my lifetime, has just metastasized from there. It has been taken on by high-profile people. It has made its way onto the internet. It has made its way into influencer culture. Of course, then the COVID pandemic was the perfect storm for all of this to spiral, I think, really out of control. Lauren Goode: I think we all remember that moment during the COVID pandemic when information was scant. I think we were all very afraid. Donald Trump said something about injecting disinfectant into your body. I think we can safely say that was misinformation. Katie Drummond: Yes, I think we can safely say that injecting bleach or whatever horse medication was being bought up across the country by desperate people ... Look, COVID was this very, very scary, very isolating, very unprecedented moment in American history, in world history. You think about that moment, the year 2020. Well, we all just spent the last 20 years being fed anti-vax, or at the very least just-asking-questions-about-vaccines kind of narratives. First, in the analog media, and then through social media and all over the internet. You have people who are already very alienated from the US healthcare system. They don't trust big corporations, they don't trust big pharma. There are good reasons for all of those things. They're sitting at home, they're by themselves. They don't have access to their broader community. What they do have access to is the internet and they start hearing the President of the United States talking about injecting God knows what nonsense into their veins, and there you have it. I think COVID was the rock bottom moment for this country, at least so far, in adoption of vaccines. I will say, I have family members, I'm sure so many people listening do, maybe you guys do, too. I have family members who chose not to be vaccinated for COVID in 2020 or 2021, and to this day are not vaccinated against COVID because they are scared of mRNA technology, they're scared of vaccines. They think that the vaccines do more harm than good. That is the institutional leadership that has brought us to this moment where we have children in the United States of America dying of measles. That is where we are. Michael Calore: That is a rough place to be. Katie Drummond: Yeah, it sucks. Michael Calore: Okay, let's take another break and then we'll come right back. Welcome back to Uncanny Valley.
We're going to shift away from talking about health and we're going to talk about Signal. Because the thing that has been blowing up our group chat this week is, in fact, a group chat. It's a Silicon Valley group chat, it's a bunch of elites talking about God knows what. What do we know about the group chats? Lauren Goode: Katie, do you want to take this one? Katie Drummond: Oh, Lauren, this is so yours. Lauren Goode: Well, normally with Overheard, we would talk about something we've each overheard in Silicon Valley, but in this case, we are talking about what Ben Smith overheard. Ben Smith is the co-founder of a news outlet called Semafor and he published a story this week about the Silicon Valley private group chats that have been shaping politics for years now. There are several chats referred to in this story, but they fall under the umbrella basically of something called Chatham House, which is based on the idea that, in order for people to express their ideas freely, they have to be able to speak in a private space. These chats that Ben reported about are made up of billionaires, venture capitalists, thought leaders, and their views are mostly right-leaning or even fringe. These are members of a technocratic society who are expressing ideas that they believe would get them canceled online or shot down by the woke mob. Now instead of expressing them on Twitter like they might have a little while ago, they're putting them in group chats. These include reactions to a Harper's Letter that came out back in 2020 that was somewhat controversial. But also, more recently, these guys are responding to Trump's tariffs, where people aren't necessarily falling along party lines. They're actually criticizing Trump's tariffs. What's interesting to me, aside from the content of these chats, is that we're in this moment where reactionary right-wing politics are playing out in private Signal chats, but actually have so much influence over the public sphere right now. It's the modern-day version of salons. Katie Drummond: That's what I thought was so interesting about the story. It's a fascinating story and honestly it was the kind of story that made me wish there was more. I was like, "Come on, Ben. Get some screenshots, man." Lauren Goode: Get the goods. Katie Drummond: Show us the goods. Lauren Goode: Yeah. Katie Drummond: But it was this idea that, for so many of these people ... We are talking about millionaires, billionaires, we're talking about the wealthiest, most powerful people running businesses or VC firms, or what have you, in this country who are effectively saying, "I'm too scared to go on social media anymore because people are mean to me in the comments. I'm going to go hide with my other rich friends and we're going to talk on Signal instead." One of the big takeaways for me was the incredibly thin skin of some of these people, who feel like they can't vocalize an opinion or share a point of view on social media. And that they feel like they need to take it to a safe space and workshop it with 100 of their closest billionaire friends, before they can all put it out in public together as a united front. I thought that was fascinating. I would say there are plenty of good reasons to shy away from using social media or sharing your opinions on social media. That is very real, the mob mentality, people going after you for what you think. Or your opinion, if you are a public figure, making news in a way you may not like.
But I had to laugh at the idea that, for some of these people, sharing a thought on Twitter of all places, which is a pretty safe space for people with pretty extreme points of view, I will say, just felt too high-risk in this woke world that we live in, and that they need to go hide away in a confidential group chat to talk about what they really think. I thought that was a little bit ridiculous. Lauren Goode: Well, right. Then the moment that someone says something that goes against their ideologies, like Marc Andreessen says, "I think it's time to take a Signal break." I also thought it was interesting how Marc Andreessen appears to have a couple of lackeys who he just tells to assemble these group chats for him. "Put me in with smart people!" Then someone goes and assembles a group chat of 20 people, and Marc Andreessen apparently is one of the most prolific texters. Katie Drummond: Yes, I loved that. Lauren Goode: Someone else in the article was saying, "I don't know how he has the time to do this. He's much busier than me, and yet he's the most active participant in this group chat." Katie Drummond: Honestly, just imagining him frantically toggling between different text groups throughout his day and his night while trying to do his job made me feel very stressed out. Take a breather. Please. Lauren Goode: Yes. Katie Drummond: Please, just chill out a little bit, man. Lauren Goode: Yes. Katie Drummond: It's too much. But also, add Lauren and I to your group chats. Lauren Goode: Right. I was just going to say add us, you cowards. Add us to your group chats. We are open to joining the group chats. Max Read did a pretty good analysis of this in his Substack newsletter. He described how this is the perfect confluence of events for people to be radicalized within these group chats. Katie Drummond: Right. Lauren Goode: Because they started back in 2019, 2020, and then Clubhouse was a thing. Clubhouse was a moment when people were saying the quiet parts out loud on Clubhouse. But really, there were all of these little networks and groups that were forming behind the scenes because people were sitting home, nothing to do except be online and live online. That led to these people coming together, but coming together along these explicitly political lines, and then radicalizing each other. Katie Drummond: So they're saying the quiet part quietly over Signal, privately. I don't love where all of this leads. I was very glad to see this story come out and to have a little bit of sunlight cast on this phenomenon. Thank you, Ben Smith. You did a good one. Lauren Goode: Yeah, I don't think that there's anything else in Silicon Valley that people are talking about quite as much right now. Maybe we should be talking about other things, though. Michael Calore: Yeah, we did just talk about RFK and Health and Human Services for 30 minutes, so thank you for the levity at the end of the show. Thank you all for listening to Uncanny Valley. If you liked what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you'd like to get in touch with us with any questions, comments, or show suggestions, write to us at uncannyvalley@ Today's show was produced by Kyana Moghadam. Amar Lal at Macro Sound mixed this episode. Paige Oamek fact-checked this episode. Jordan Bell is our executive producer. Katie Drummond is WIRED's global editorial director. Chris Bannon is our head of global audio.
