Why Silicon Valley Needs Immigration

WIRED
A general view of the UC Berkeley campus, including Sather Tower, also known as The Campanile, as seen from Memorial Stadium in Berkeley, California. Photo-Illustration: WIRED Staff

Expanded deportations, a virtually shut-down asylum process, increased scrutiny of H-1B visa applicants: immigration policy has been overhauled under the latest Trump administration. And just last week, the administration said it would begin revoking the visas of some Chinese students who are currently studying at US schools. On today's episode, we dive into the impacts that these changes could have on the tech industry, from the talent pipeline to future innovations.

Articles mentioned in this episode: The Trump Administration Wants to Create an 'Office of Remigration' by David Gilbert, and US Tech Visa Applications Are Being Put Through the Wringer by Lauren Goode.

You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Katie Drummond on Bluesky at @katie-drummond. Write to us at uncannyvalley@

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Michael Calore: A quick note before we begin today. We recorded this episode before the Trump administration's travel ban on citizens from 12 countries from entering the United States and before its proclamation to suspend all new student visas for students enrolling at Harvard University.
Although we will get to student visas quite a bit in this episode. How's everybody doing this week? Lauren Goode: I'm good. I just got back from Katie's motherland, Canada. Michael Calore: Oh. Lauren Goode: Yeah. Katie Drummond: Lauren and I were in Vancouver together. Lauren Goode: We were. Katie Drummond: Although I saw her for probably 15 minutes in the span of like five days. I'm doing okay. I also, as we just established, was in Vancouver with Lauren at Web Summit. I took a red-eye home on Thursday night and it was three hours late, and so that was a lot. Michael Calore: Yikes. Katie Drummond: And then Lauren, right before we started recording, just told me that I have a bobblehead, so I'm just grappling with that feedback. Lauren Goode: I did not say bobblehead, I said you had celebrity energy because your head presents well on camera. I don't know. Mike, how are you doing? Katie Drummond: Yeah, how are you doing, Mike? Michael Calore: I'm staying out of this one. Also, I have a gigantic head. I can tell you that I wear a size eight fitted cap, which is the largest size that they make. Katie Drummond: Do you want to know what size I wear? Michael Calore: Yes. Katie Drummond: I have to shop at a specialty hat store. Because my head actually doesn't... I can't wear. Lauren Goode: What is this store called? Katie Drummond: I can't wear normal hats. Lauren Goode: Is it called Bobblehats? Katie Drummond: No, I'm going to look it up. It's from Oddjob Hats. The last hat I bought was called Big Running Hat. Just Big Running Hats. Lauren Goode: Do you also have one called Big Walking Hats? Katie Drummond: Probably. Probably. Lauren Goode: Oh. Michael Calore: Oh, it's too much. Lauren Goode: All right. Michael Calore: Should we get into it? Katie Drummond: Let's do it. Lauren Goode: Let's do it. Michael Calore: This is WIRED's Uncanny Valley, a show about the people, power, and influence of Silicon Valley.
Today we're going to be talking about the Trump administration's policies around immigration and the effect that those policies are poised to have on the tech industry. Since day one of the current administration, immigration policy has been overhauled: the asylum process was virtually shut down, the obscure Alien Enemies Act was invoked to deport hundreds of people, and birthright citizenship is being challenged in the US Supreme Court. Visas have been under increased scrutiny. WIRED recently reported how the H-1B visa application process is becoming more hostile, and last week the administration said it would begin revoking the student visas of some Chinese students who are currently studying at US schools. So today we're going to dive into the impacts that these changes could have on the tech industry, from the talent pipeline to future innovations. I'm Michael Calore, director of Consumer Tech and Culture here at WIRED. Lauren Goode: I'm Lauren Goode. I'm a senior correspondent at WIRED. Katie Drummond: And I'm Katie Drummond, WIRED's global editorial director. Michael Calore: I want to start us off by focusing on how the Trump administration has been handling student visas. Just last week, Secretary of State Marco Rubio announced that the administration would start to "aggressively" revoke visas for Chinese students. The State Department said it would focus on students from critical fields and those with ties to the Chinese Communist Party, but also that it would just generally enhance scrutiny across the board. The vagueness of these guidelines has sent students, parents, and universities into an emotional tailspin. What do we make of these latest developments? Lauren Goode: So there were actually two directives that went out last week, and I'm sure we're going to hear more, but I think they're both worth noting.
The first was a directive sent to US embassies around the world telling them to pause any new interviews for student and visitor visas, including the F, M, and J visas, until further notice. And this whole idea was that it was in preparation for an expansion of social media screening and vetting. So basically the State Department is going to be looking much more closely at students' online activity, social media activity, and consider that as a part of their interview process when they're applying for a visa to the US. That was already a part of the application process, but now it's just going to be expanded. We don't really know what that means. The other was the revoking of visas for Chinese students, as you mentioned, Mike. And really I think what this does is it adds another tool to this current Cold War of sorts that we're having with China. Whether it's with the tariffs or whether it's measures like these, it's clear that the current administration wants to have the upper hand. And what we've reported at WIRED is that if this continues and the courts allow it, this would all have a significant effect on higher education, because roughly a quarter of the international student population in the US is from China. And also, this is something I think a lot of people don't realize, I personally didn't realize until I started doing more research into this: international students often pay full tuition or close to it when they come to the United States for school, which makes it an economic lifeline for a lot of these universities and also in some ways helps offset the costs for domestic students, US students who are getting scholarships or getting a partial reduction in tuition and that sort of thing. I do think in general it's dangerous territory to start targeting students of a specific nationality for these alleged national security reasons.
There are going to be questions about how effective it is long term, but also about how this could potentially weaken the US technology sector in the long term. Katie Drummond: Yeah. And I think, Lauren, you're right to point out these two directives, and I think that both got a fair bit of press attention, but I was surprised that the first announcement, this idea that we are going to be doing enhanced social media screening and vetting of international students and people applying for visas to come to the United States, the fact that that was not an international outrage when it was announced is very telling to me in terms of how much is happening in the news in the United States every single day, because that is a very chilling announcement to be coming from the Secretary of State in this country. It is a massive free speech issue, and it really speaks, I think, to what will be an ongoing theme for WIRED and unfortunately already is, which is just the techno-authoritarian country that we now live in, where these tools are essentially being weaponized to surveil and monitor not only US citizens, but people who proactively want to live and work and study here. If you dare have an opinion that is contrary to the opinion of the Trump administration, you could potentially have your visa revoked or not even be able to qualify for a visa. I think it's also important to note that everything that Lauren just spelled out and that we're talking about is part of this much larger conflict that's been unfolding between the Trump administration and higher education. So you have this Ivy League battle playing out between Trump and Columbia, Trump and Harvard. A lot of that obviously has to do with free speech issues and the Trump administration, again, essentially looking for institutions of higher education to adopt their viewpoint as opposed to being places where a plurality of points of view can be discussed and debated and held.
There was already an attempt made to block Harvard from enrolling international students. A federal judge has blocked that for now, but we will have to see where it nets out. And I think regardless of where that one legal decision nets out, there is, for so many reasons, this chilling effect where the United States is all of a sudden no longer a desirable destination for students, both at an undergraduate level and a graduate level. You have not only the Trump administration basically going to war with the best colleges in the country, you have them going to war with the actual student visa process, and then you have them going to war with research and science, already blocking billions of dollars of research funding that is earmarked ostensibly for these institutions, which now means that these institutions are much less attractive destinations. So it's not like, oh, a judge reverses a couple of decisions or one decision or blocks one thing from happening and all of a sudden we're in the clear again. This is already very clearly becoming a systemic and long-term crisis for the United States. Michael Calore: And this choking off of talent coming into research institutions and into jobs in the United States is also happening at a moment when China and the US are currently involved in an AI arms race. In January, the Chinese AI company DeepSeek showed off a reasoning model that seems just as powerful as ChatGPT but was developed for a fraction of the cost. So the US definitely needs to keep bringing in top AI talent, but how are these restrictions on student visas going to potentially shape the growth of the AI industry in the US? Lauren Goode: Yeah, this is something that when the news started to trickle out last week, we at WIRED were thinking, "Okay, this is really in our wheelhouse." We cover AI so closely, we have for years, and automatically the question is what does this mean for the AI race?
We ended up reporting a story last week, it was myself, a few other WIRED folks, Kate, Louise, and Will, and some of the sources that we spoke to were pointing out the contradiction that exists here: the White House saying that AI is one of its top priorities and then trying to send the people who are doing this kind of research, this critical research for us here in the United States, back to their home countries, or not letting them in in the first place. And at some US colleges, I would say probably a fair number of them, international students make up the majority of doctoral students in departments like computer science. One of our colleagues, Kate Knibbs, talked to someone at the University of Chicago who said that foreign nationals accounted for 57% of newly enrolled computer science Ph.D. students last year. We know that immigrants have founded or co-founded nearly two thirds of the top AI companies in the United States. That's according to a 2023 analysis by the National Foundation for American Policy. And this is something that's been going on for a long time. I had this interesting conversation with a well-known economist last week. His name is William Lazonick. I was asking him his thoughts on this crackdown on student visas, and he made an important observation, which is that foreign students pursuing those STEM careers have actually been critical to the very existence of graduate programs in those fields. And some of this is cultural. Back in the 1980s, there was this big shift that was happening in the US around money, basically. It was the era of Reaganomics and greed is good, and American students were gravitating towards careers in finance. At the same time, Lazonick said, there were significant advancements happening in microelectronics and computing and biopharmaceuticals, and that opened the window for foreign students to say, "We're going to study STEM."
So what we are potentially on the brink of right now, by thwarting or revoking these visas for foreigners, could literally affect the outcome of American technology and science development for the next several decades. Katie Drummond: And particularly at a moment where, as you said, we're in this Cold War with China, we're in this AI arms race. You hear it from the administration, you read about it in WIRED, you hear about it from Sam Altman, other leaders of the AI industry, this like, "We must beat China. We must beat China." And then stuff like this happens and you feel like, "Let's just hand it to them. Let's just give it to them." Because we are basically doing that by disincentivizing not only Chinese students, but just brilliant people from all around the world, from coming here, bringing their intellect here, bringing their ideas here. We're basically telling them, "Go somewhere else. Maybe go to China." And something I did find fascinating in that reporting, Lauren, was that the vast majority of PhD students from China and India actually typically intend to stay in the US after they graduate, while the majority of people from other countries, places like Switzerland and Canada, report actually planning to leave. Maybe they want to go back to their home country, maybe they want to go somewhere else. But rejecting the people who are most committed to staying here and to contributing to new technology in the United States is a certain kind of choice. And so other countries are already trying to take advantage of that. Hong Kong is already trying to attract Harvard students. The UK is setting up scholarships. There's a lot going on outside the United States in terms of basically trying to make the brain drain happen for us. Our loss is all of their gain. But when you put it in the context of this AI race and the US and China of it all, it feels like what we are doing is distinctly disadvantageous for us in this moment.
Unless you both disagree and think I'm missing something. Lauren Goode: No, we always say on this podcast, it would be nice if we vehemently disagreed with each other because it would create tension. But I think in this case, we are all aligned on this. Michael Calore: Yeah. This scrutiny over foreign nationals, it doesn't just end at academia, of course. It also extends into the workforce here in the US and work visas. Lauren, you recently reported on how the process to obtain an H-1B visa has become more difficult recently. Can you tell us a little bit about what H-1B visas are and why they matter so much to the tech industry in particular? Lauren Goode: Sure, yeah. So H-1B visas are work visas that are granted for specialty occupations. They're typically valid for three years. They can be extended in some cases. This type of visa was first introduced in 1990 as part of a broader immigration act. And the idea is that it's supposed to help employers hire people with specialty skills that they might not otherwise get from the talent pool that already exists in the US. And the H-1B is a bit of a controversial visa. Even just saying that you can hire people from outside the US because there are people who don't have that skill set here naturally prompts the question for some people, "Wait, why are we not educating and training people in the US to have those jobs?" But basically what I was starting to hear from immigration attorneys who I was speaking to is that requests for evidence, RFEs, had shot up since Trump took office in January of this year. Typically, when a person is applying or petitioning for an H-1B, their lawyer submits a bunch of paperwork on their behalf, and that typically will include resumes, awards, letters of prestige, letters of recommendation from colleagues and friends and that sort of thing. You basically have to put together this packet to prove that you're worthy of this specialty visa.
And then sometimes it would get bounced back and USCIS would ask for more evidence. In this case, a lot of visa applications are being sent back. There are a lot more RFEs, or requests for evidence, for applicants. And that's something that four different immigration attorneys I spoke to said they're seeing happening. It's also not just happening with the H-1B. There's another type of visa called the O-1 Extraordinary Ability visa. Once again, this is a specialty visa. A lot of tech entrepreneurs, engineers, and founders alike will come here under the O-1 visa, and folks in that world are starting to say that they're getting pushback on their applications as well. All of this is instilling fear amongst some entrepreneurs and tech workers in the Valley, and it's creating a climate of uncertainty where people who seemed so committed and excited to come here and build their companies here and contribute to the technological environment here are now rethinking that because of what's going on with visa applications. Katie Drummond: Ugh. That is so bleak. 66% of people working in tech in Silicon Valley are born outside of the US. That is just an astonishing number to think about that being at risk. Lauren Goode: Yep. We're talking about the rank and file in a sense, but also just look at some of the CEOs- Katie Drummond: Yeah, look at the leadership. Lauren Goode: Of the companies we're talking about. Sundar Pichai and Satya Nadella, and I think the most... Should we talk about the most obvious one? Katie Drummond: I was going to say, just look at Elon Musk. Lauren Goode: Yes. Katie Drummond: What an international success story he is. Lauren Goode: Yes. Katie Drummond: What a success he has been for the United States of America. I will say, the H-1B visa program is not perfect.
It's certainly been criticized for not being a fair system or a fair lottery, but despite the fact that this is an imperfect system, none of this actually feels like an approach to fix any of these problems or challenges. It's more just creating extra adversity and uncertainty around a process that's already very lengthy and very expensive. Michael Calore: So these challenges to the visa application have ramped up recently, but we're already seeing the effects of this, right? Lauren Goode: Yeah, this is something that's harder to quantify right now because these visa policies are just getting put in place. Everything's just changing. But I think we can qualify it by saying that the folks that we're talking to in Silicon Valley who are either here on a visa, or were hoping to stay on an extended visa, or were thinking of maybe coming here and were working with attorneys to get that process started, are now just reconsidering everything. You're already throwing yourself into a pretty uncertain world when you decide to launch a startup. You're choosing hard mode for yourself when you do that. So now throwing this uncertainty into the mix and thinking like, "Am I actually still going to be able to be here in three years, if that's how long it takes me to actually make a product or build up a profitable business or raise my next funding round or something?" And if you can't see beyond that, I don't see how you realistically say, "Oh, the US seems like a good bet right now." Katie Drummond: It just underscores how systemic and long-lasting this is going to be. Even if this were six months of bad federal policy, and somehow the administration wakes up overnight and flips a switch, and we see a lot of this pressure and additional scrutiny and adversity around immigration, around H-1B visas, ease, there has already been so much damage done. We are going to feel this in this country for such a long time.
Michael Calore: One of the things about immigration policy that we have to talk about is something that our colleague David Gilbert has reported on for WIRED, and that is, as part of a reorganization of the State Department, the Trump administration is creating an Office of Remigration. And in very simple terms, remigration is an immigration policy embraced by extremists that calls for the removal of migrants, including non-assimilated citizens. What do we make of this? Katie Drummond: So I talked a little bit earlier about Marco Rubio announcing that enhanced social media scrutiny. I was surprised that that wasn't more of an outrage, that it didn't get more coverage. This is even more extreme in that context, and it is a truly shocking development in this administration's war on anyone who is not a white American. That is basically what this is. I was shocked when I read this story last week and realized that this should be front-page news for every news organization in the United States, and somehow it just wasn't. Lauren Goode: So the whole idea behind this is that they want to create a white ethnostate in this part of the world. Katie Drummond: That is our understanding of it, yeah. There is a long history to the idea of remigration, and it really comes together through the lens of MAGA. It was present in the administration's first term as well. You had the Muslim ban, you had this idea of building a border wall, and I think what's so different this time from 2016, there's a lot that's different this time, I think big picture, as we have seen, what's different is that this time the administration really means business. They're buttoned up, they're here to get the job done.
And so it's the speed and the intensity at which these ideas, this very racist idea of remigration, are going from just being something that's done in a scattershot way to now showing up as a tactical, specific policy proposal that is being released in official government documents. It's just a very different kind of approach and it feels much more real. It is much more real. And it's happening so quickly, and amid, I think, so much other news, that people are just not seeing that it's happening, and that's really scary. Lauren Goode: And what happens too, I think, is that there are all different kinds of immigration policies we're talking about here, and if you're not paying close attention you might conflate them. There's a difference between the asylum process being shut down and the Alien Enemies Act being invoked, and what may be going on with student and foreign visitor visas, Extraordinary Ability visas, which is different from what's being proposed with this remigration document. And a lot of it is happening under the guise of, "This is better for national security." There are of course going to be some instances in which that is true. For example, The Stanford Review reported, I think it was a few weeks ago now, that they'd become aware of Chinese nationals actually trying to spy on Stanford University and its students. They'd purported to be other students. This sort of thing does happen. There are nations that are our adversaries that want to get information from the United States and wield it in nefarious ways, but for the most part, the Trump administration is putting immigrants in this giant bucket and creating this world in which they're all a threat to the United States. And that is absolutely not the case. Michael Calore: Yeah, these policies are going to obviously shape the culture of this country and they're going to shape the business that is done in this country. But of course, they are absolutely going to shape the technology industry.
So let's take a break, and when we come right back, we'll talk about the effects that these policies will have on tech. Welcome back to Uncanny Valley. We've been talking about the Trump administration's immigration policies and how they could shape the future of tech development in the US, and I'm curious to know how tech companies and workers have been reacting to these measures so far. Lauren Goode: I would say the number one thing I've heard directly from folks is that they are scaling back on their travel to conferences, whether they're academics or tech workers. And that may have a little bit more to do with what has been going on in some intermittent cases at the border, of people getting detained at the border. But also people are thinking about the status of their visa right now, whether they're an American citizen or they're here on a visa. Tech conferences and academic conferences are just a part of this world. Katie and I were just at one in Vancouver. And so if you have concerns about being let back into the United States after traveling, you may decline to go to one. And the same goes for universities. I think Brown University urged its international staff and students to postpone any plans to travel outside of the US out of an abundance of caution. Katie Drummond: It's interesting to think about the flip side of that, because for most of the tech industry and the human beings who work in that industry, this is a very scary thing. It's affecting how they do their jobs, it's affecting whether or not they travel. And then you have the flip side of it, which is that there are certain parts of the tech industry that are really benefiting from these new policies. And I think Palantir is probably the best example of that. So Palantir is the brainchild of Peter Thiel, obviously a megadonor to the GOP.
And Palantir is really making it rain with the Trump administration, and they are benefiting tremendously from these policies and from DOGE efforts and administration efforts to centralize and unify data about American citizens and about immigrants. God knows what you could use all of that information for once it's centralized. Palantir recently won a $30 million no-bid contract to build ImmigrationOS, which essentially provides real-time data about the whereabouts of migrants and about deportations. Palantir obviously has worked with the US government for a very long time. They've had a contract with ICE since 2011, so that's almost 15 years ago. But we are really seeing the surveillance state that Palantir helps support grow exponentially and grow very quickly, as a result of the administration's aims around immigration for one thing, but also just their aims to basically stand up and run an authoritarian state that would impact not only immigrants but US citizens as well. Michael Calore: So some tech companies are obviously seeing a paycheck opportunity in these immigration policies, but we can't say that the tech industry is operating as any kind of bloc, like they're not in lockstep ideologically aligned with the immigration policies. And a lot of key tech leaders have been outspoken about the fact that they're not too happy with these policies, right? Lauren Goode: Yeah. It's honestly a little bit confusing. Someone like Elon Musk has in the past been in support of the H-1B. He employs more than 1,000 people on that type of visa. He even used it himself in his early years in the US, and he has in the past tweeted in support of immigrants being in Silicon Valley and contributing to the economy here. More recently, though, he has called for a reform of it, and he's not alone in that. Same with Marc Andreessen, obviously one of the most vocal, influential people in Silicon Valley. Surprisingly, they've got some interesting bedfellows.
The Democrat Ro Khanna of California, Vermont's Bernie Sanders, they're also calling for a reform of the H-1B program. It goes back to what Katie was saying earlier, that there have been some critiques of H-1B. There's been a lot of backlash to the program, and it's hard to know sometimes whether it's coming from this kind of vitriolic or potentially racist place around how people feel about immigrants versus, "No, I'm actually in support of this because it's good for the US economy and the tech industry, but the process is broken." Katie Drummond: To me right now what we're looking at in the year 2025 is just part of this larger trend of tech leaders staying silent or muting their criticism or maybe posting something on X, but largely staying silent when it comes to politics, when it comes to political issues, at least publicly. We don't know what's happening behind the scenes, what kinds of lobbying efforts are going into trying to sway the administration one way or another when it comes to H-1B visas, when it comes to the importance of brilliant people from around the world being able to study and work in the United States and in the tech industry. But publicly for sure, we are not seeing that really robust resistance on the part of the tech industry. And that is certainly strategic because these guys know that this time the administration means business, they need to play ball, they need to work with this administration. And so we can only hope that behind the scenes there are more vigorous discussions happening than what we're seeing play out publicly. Michael Calore: It's distressing to me that the disconnect is so loud here because we really have to underscore how important of a positive role immigration has played in the growth of the tech industry. 
And in Silicon Valley in particular, like Lauren you were talking about earlier, some of the largest companies like Google and Microsoft have all had either founders or co-founders or CEOs who are first- or second-generation immigrants. And if you look at a list right now of the country's current startups that are worth more than a billion dollars, more than half of them have an immigrant founder. Yeah. So the long-term stakes of keeping talented researchers and engineers and businesspeople out of the country seem deeply, deeply consequential. Lauren Goode: It's also just not a zero-sum game. If the tech industry continues to grow, presumably there would be enough room for high-skilled American workers and high-skilled foreign nationals working together. Michael Calore: As it always has been. Okay, let's take another break and we'll come right back with recommendations. Thank you both for a great conversation. We are going to shift gears and talk about something completely different, which is our own personal desires and loves. We're going to do recommendations. Who wants to go first? Katie Drummond: My recommendations. It's been a busy time, so I feel like I'm a little bit limited on hobby activities, but a book I just finished that I do recommend is Barry Diller's memoir. If you're not familiar with Barry Diller, I believe he is now the chairman of IAC. But he's a long-time executive who invented the modern-day Hollywood approach to movie-making. It was great, so I highly recommend that. But my other recommendation is that last night I was thinking about what to have for dinner, and I made an omelet, and I haven't had an omelet in a while. The omelet had a red pepper, it had spinach, and it had shredded cheese, and it was just a really nice reminder: if you're thinking about what to have for dinner tonight, a nice omelet, some toast with French butter, a can of seltzer, you might just be all set. That and a book. Michael Calore: Lauren, what is your recommendation?
Lauren Goode: My recommendation is after you make your breakfast for dinner, you should check out the Brazilian film I'm Still Here. When I was flying home from Vancouver last week I started watching it on the plane and did not finish it. It was one of those things where I went home, unpacked, and then immediately bought the movie because I was like, "I need to finish watching it."

Katie Drummond: Wow.

Lauren Goode: And I loved it so much that I knew I wanted to own it. It's beautiful. It's beautifully done. It's based on a true story of a Brazilian congressman who is abducted during the military dictatorship in Brazil, which was at its peak in 1970, 1971. And really it's about his family too. It's about his wife, who's this incredibly strong character, and their five children. And because it's the 1970s, the world is just different. Technology is limited, they have a family camcorder and that's really it. And the kids are just running around in their swimsuits all day long and things just feel simpler, but also complicated. And there are these scenes in the beginning where people are basically being rounded up by the military and you hear families having these conversations of, "Should we stay or should we go?" It's chilling, but it's a beautifully done film and so I highly recommend I'm Still Here. All right, what's your recommendation?

Michael Calore: I'm here to tell the people to watch Mountainhead. This is a fiction film that feels very close to reality.

Lauren Goode: Just when I thought we were getting away from the tech bros.

Michael Calore: It's a fiction film from Jesse Armstrong who is the creator of Succession. This is a movie that he did for HBO. We're just calling it HBO. Everybody deal with it. It's a bro fest. It's about four tech founders who gather at a mountain retreat for a social weekend to catch up. There's a strict no-deals policy, but of course that policy goes by the wayside as soon as things start happening.
The four principal actors are Steve Carell, Jason Schwartzman, Cory Michael Smith, and Ramy Youssef. And if you liked the witty back and forth and the weird absurdist drama in Succession, there's plenty of that here. It's also very much of the moment because the backstory that happens during the film is that the world is embroiled in a bunch of political chaos because of AI deepfakes on social media that are very inflammatory politically.

Lauren Goode: Great. So also based on a true story is what you're saying.

Michael Calore: Yeah.

Katie Drummond: I do want to watch that. I would like to watch it. I will watch it.

Michael Calore: It's not exactly a good time, but it is a rewarding time.

Lauren Goode: I also will watch Mountainhead, but I'm actually wondering, and Katie, while we have you on the podcast, if I can just ask you, does that count as work? Because I interview those-

Katie Drummond: No.

Lauren Goode: Bros all the time, and so I can just take two hours during the day and watch that, right? It's work.

Katie Drummond: Abso-fucking-lutely not.

Lauren Goode: All right, we answered that.

Katie Drummond: We sure did.

Lauren Goode: Ooh.

Michael Calore: Thanks for listening to Uncanny Valley. If you like what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you'd like to get in touch with us with any questions, comments, or show suggestions, write to us at uncannyvalley@ Today's show is produced by Adriana Tapia and Kyana Moghadam. Amar Lal mixed this episode. Jake Lummus was our New York studio engineer. Matt Giles fact-checked this episode. Jordan Bell is our executive producer. Katie Drummond is WIRED's global editorial director. And Chris Bannon is the head of Global Audio.

Wired's editor tells me how she got 62,000 new subscribers in two weeks

Yahoo

28-05-2025



News publishers weren't expecting a new "Trump Bump" in 2025 — they figured consumers had news fatigue. So how did Wired sign up 62,000 subscribers in two weeks in February? Katie Drummond, who took over Wired in 2023, explains.

I write lots of depressing stories about the fate of media companies. Let's switch it up: Did you hear the one about the publisher who figured out how to find tens of thousands of new paying subscribers? That publisher is Wired, Condé Nast's tech site (and print magazine). And the strategy Wired used to find new subscribers is both super simple and very hard to pull off: Become a source for news lots of people want, and can't find anywhere else. That's the way Katie Drummond, who took over Wired in 2023, tells it. Drummond says she positioned Wired to specialize in breaking news — and then, when Donald Trump and Elon Musk joined forces after the 2024 election, she had plenty of news to break.

On the one hand, that narrative makes plenty of sense. What Musk and his DOGE team tried to do to the federal bureaucracy was something we've never seen before. And Musk's chainsaw efforts affected millions of American workers and people who depend on those workers. So that's a big audience. On the other hand, lots of publications got a boost the first time Donald Trump was in office, for similar reasons. And the conventional wisdom was that it wouldn't happen again this time — news consumers were burned out on politics, and had already subscribed to everything they were going to subscribe to.

So how did Drummond do it? You can hear my entire conversation with her on my Channels podcast — she's a great talker and well worth listening to in full. But you can get a sense of her strategy and tactics in this edited excerpt:

Peter Kafka: In February, you guys said you'd added 62,500 subscribers in two weeks. At first I thought that number was a typo — publications just don't grow that fast. But apparently you really were. What happened?
Katie Drummond: The answer is that our politics coverage — and specifically the coverage we started doing around the so-called Department of Government Efficiency and Elon Musk and his involvement in the Trump administration — drove colossal audiences to Wired. I've never seen anything like what we saw in February and March. That was where the subscription boom came from.

Lots of publishers saw interest in Trump spike in 2016, 2017. But lots of wise people said media companies wouldn't see a "Trump Bump" this time around. Were you surprised to see that level of interest?

We weren't expecting it. I've worked in digital media long enough to just always expect the worst, or just the status quo. Where the real surprise for me came from was that when we started covering DOGE, we started covering it really hard — like several stories a day, every single day, seven days a week, week after week. And after a week, I looked around, and was like, "where is everyone else? Why aren't other news organizations covering this?" I think that us having first-mover advantage on that story meant that for a lot of people, just out there in the world, trying to figure out what was going on, they saw Wired doing this coverage — and they looked at everybody else and sort of felt like, "where is the rest of the media on this?" A lot of the feedback we got from readers was "thank you so much for doing this coverage that nobody else seems willing to do. I'm now a subscriber."

Did you feel that some of your new subscribers were doing something similar to people who subscribed to places like the Times in 2017? "You are fighting the good fight. I am signaling with my credit card that I like what you're doing. And I am against Donald Trump/Elon Musk."

I think it was people looking for answers and trying to understand what is going on inside of these federal agencies: "This seems really wild and really troubling and really disturbing." Of course, we get anti-Trump sentiment in our inboxes.
But it was less about anti-Trump and more, "Thank you for giving me information about what is happening inside the government of my own country. I appreciate that."

On the one hand I can see why you guys would be positioned for this coverage: Elon Musk is a big tech guy; you're a tech publication. But Wired wasn't a place I would turn to to learn what's going on inside government agencies. How did you end up positioned for that?

When I took the job in September 2023, I looked ahead at 2024. There was going to be a very consequential US federal election. There was also a record number of elections being held around the world. Elon Musk was not top of mind for me then. But generative AI was top of mind. Misinformation was top of mind.

Everyone was worried we'd see replays of 2016, 2020, and that the platforms weren't going to be ready for it.

Exactly. And the potential for more hacking and foreign interference in elections. It felt to me like, "There are so many different intersections with technology and with what we cover — we need to position ourselves now." I made a pitch to [Condé Nast] that I needed to build out a politics team. They were very receptive, very supportive. So by the end of 2023, we had that team in place. We started doing the coverage, and then midway through 2024, our focus changed when Trump was grazed in the ear by a bullet, Elon Musk endorsed him and it very quickly turned into a very different kind of story. One where we were able to bring a lot of expertise to bear around Elon Musk and the tech industry — how they think, how they operate. And with DOGE… I remember Zoe Schiffer, our director of business coverage — she wrote a book about what happened when Elon Musk bought Twitter. And she said: "This is going to be the Musk playbook — when he goes into a company, this is what he does. I think this is what we're about to see inside the federal government." So we positioned ourselves to cover it through that lens.
Tesla stock is down and Elon Musk is much less visible than he was at the beginning of the year. DOGE doesn't seem to command the same kind of attention it used to. If people gave you money in February because they cared about DOGE, how do you keep them engaged in May and October?

It's something that we think about and talk about all the time. The audience numbers on those stories now are not revolutionarily good — but they're still very good. And our mandate is to continue covering that as long as it is a consequential beat. We're going to stay on it. There will be more really big stories and really consequential stories to come out of what they are doing inside these agencies. But in terms of the community that we've built and all of those subscribers that we've added, now the challenge for us is to introduce them to the rest of Wired and what we have to offer. And to create new opportunities for them to really get to know Wired and get to know our journalists. So we're working on all sorts of things. We have been experimenting since late last year with livestream AMAs with Wired journalists, where subscribers can ask them questions. Thousands of people sign up and join those. This was the idea before the DOGE reporting really took off — to build Wired subscribers into more of a community and create less sort of a transactional back-and-forth.

How is churn? I assume people who were signing up in February are more likely to stop subscribing than someone who's been with you for a while.

Interestingly, our conversion rates are still way higher than they were last year and the year before. But our churn has gone way, way, way down. Among new subscribers, we're seeing churn rates that are vastly, vastly lower than what we were seeing in subscribers who signed up a year ago. Which is interesting. But again, it's only May. So we need to give that time.

Sounds like you solved the whole thing. You solved publishing.

I wake up every day assuming that I have not.
Which I think is a pretty safe way to operate in 2025.

How to save a magazine — with help from Donald Trump and Elon Musk

Business Insider

28-05-2025

Let's Talk About ChatGPT and Cheating in the Classroom

WIRED

23-05-2025



Photo-Illustration: WIRED Staff/Getty Images

There's been a lot of talk about how AI tools like ChatGPT are changing education. Students are using AI to do research, write papers, and get better grades. So today on the show, we debate whether using AI in school is actually cheating. Plus, we dive into how students and teachers are using these tools, and we ask what place AI should have in the future of learning.

Michael Calore: Hey, this is Mike. Before we start, I want to take the chance to remind you that we want to hear from you. Do you have a tech-related question that's been on your mind or just a topic that you wish we'd talk about on the show? If so, you can write to us at uncannyvalley@ and if you listen to and enjoy our episodes, please rate it and leave a review on your podcast app of choice. It really helps other people find us. How's everybody doing? How you feeling this week?

Katie Drummond: I'll tell you how I'm feeling. It's Katie here. My vibe levels are up. I'm feeling really good.
I was at Columbia University earlier this week with five of our fantastic editors and reporters at WIRED because we were honored at the Columbia Journalism School this week for our politics reporting. And so we got dressed up, I gave a speech and it was so wonderful to have a minute to sit back and take a breath and think about all of the journalism we've done in the last several months and celebrate that. And it was also really, really cool to just see and talk to journalists who were graduating from journalism school and feel their energy and their excitement and their drive to do this work. Because I think, as you guys know, and you probably agree, we're all quite tired. Lauren, how are you?

Lauren Goode: When you said, "Because we're tired," I wasn't sure if you meant we're just tired in this moment or we are existentially tired, because I am a little tired in this moment, but I am not existentially tired. I'm here for the fight, Katie.

Katie Drummond: Oh, I'm so glad to hear that.

Lauren Goode: Yeah.

Katie Drummond: Yeah, I'm tired in this moment. I just think it's so nice to spend some time with a couple hundred people who are new to this and just so excited to get down to business. It was very cool.

Michael Calore: How much ChatGPT use is there at Columbia University in the journalism department, do we think?

Lauren Goode: Good question, Mike.

Katie Drummond: I really hope very little.

Michael Calore: Me too. For the sake of us all. This is WIRED's Uncanny Valley, a show about the people, power, and influence of Silicon Valley, and today we are talking about how AI tools like ChatGPT are changing education from middle school to graduate school. More and more students are using generative chatbot tools to gather information, finish assignments faster and get better grades, and sometimes just write things for them.
Just this month, there has been a ton of reporting and discourse on this trend, and some of it has been fairly optimistic, but a lot of it has also been critical. As one user on X put it, "The kids are cooked."

Lauren Goode: The kids are all right.

Katie Drummond: Which X user was it? I can think of a few. I'm just curious. We don't actually know.

Michael Calore: So on this episode, we're going to dive into how students are using ChatGPT, how professors are using it, whether we think this trend is, in fact, cheating when the students use it, and what AI's place could be in the future of learning. I'm Michael Calore, director of consumer tech and culture here at WIRED.

Lauren Goode: I'm Lauren Goode. I'm a senior correspondent at WIRED.

Katie Drummond: And I'm Katie Drummond, WIRED's global editorial director.

Michael Calore: So before we dive into what has been happening with AI and students potentially using ChatGPT to cheat in their coursework, I want to have all of our cards on the table. Did either of you cheat in high school or in college? And if so, how?

Katie Drummond: I feel like I should go first here because I'm the boss and I want to set Lauren up for success in her answer. I did not cheat in college. I was a very serious person in college. I was getting an undergraduate degree in philosophy, which felt like a very serious thing to be doing at the time. So I was totally above board. And also, as I was thinking about this earlier, this was in the early 2000s and it wasn't, I don't think, or wouldn't have been particularly easy to cheat at philosophy back then, whereas interestingly, it would be pretty easy to cheat at philosophy now. You're reading a lot. You're writing a lot of essays. It's hard to imagine how I would've effectively cheated, but I didn't cheat. I did cheat in high school though. Everybody cheated all the time. I'm not saying I cheated all the time. I'm not going to answer that question, but I did cheat.
I specifically remember we had graphing calculators and we would program equations and answers into the calculators using special code so that teachers, if they went through our calculators, wouldn't be able to tell that they were cheats. But we went to pretty great lengths to cheat on math exams, which is so stupid because I would've done great on the math exam regardless, but there was just something about being able to get away with it.

Lauren Goode: Do you feel like a weight has been lifted from you now that you have confessed?

Katie Drummond: No, I don't care. Look, I think that most students, at least in middle school and high school, dabble with cheating, and so I have no shame. What are they going to do? Strip me of my high school diploma? Good luck.

Lauren Goode: Yeah, it's kind of a rite of passage.

Katie Drummond: Exactly.

Lauren Goode: I was very similar to Katie in that I did not cheat in college. In high school though, I remember reading Cliff's Notes for some book assignments. My best friend and I also did some light cheating in high school because the first initials of our last names weren't that far apart, and it was a small school as well, so she was often sitting in front of me and I was directly behind her. And we had a tapping scheme where we'd tap our pencils during Scantron tests.

Katie Drummond: Wow.

Michael Calore: Oh, like sending secret messages to each other.

Lauren Goode: Yeah, yeah. So if she was on question 13, she would sort of slide her Scantron to the side of the desk so that you could see which question she was on, question number 13, and then the person who had the answer would tap their pencil a corresponding number of times to be like, answer A, answer B, answer C. Anyway, I don't want to implicate her. Totally. She's an adult now with a career and two grown children, and I'm not sure if the statute of limitations has expired on this grand felony from Notre Dame Catholic High School. So maybe we can scrap that from the record.
Thank you very much. Mike, did you cheat?

Michael Calore: No, I was a total goody-goody, like super-duper do-everything-by-the-book Eagle Scout kind of kid. Didn't cheat in high school. I did encounter a course in college that I had a really hard time keeping up with. It was the 19th-century British novel, and the reading list was absolutely brutal. It was one super long, boring book every week. And I mean, there was some good stuff in there, like Jane Eyre and Frankenstein. And then there were absolutely terrible books in there, like Barchester Towers and The Mayor of Casterbridge. So I learned the art of the shortcut. I would zoom in on one chapter and I would read the Cliff's Notes, and then I would read that chapter and I would be able to talk about that chapter in depth on a test.

Katie Drummond: Oh, that's very smart. That's smart. But not cheating.

Michael Calore: Not necessarily cheating. I don't consider Cliff's Notes to be cheating. I'm one of those people.

Lauren Goode: Why not?

Michael Calore: Well, because you're still actually doing the work and comprehending. And I think some of the examples that we're going to talk about don't even have that step in them. They just sort of skip over all the learning.

Lauren Goode: Yeah, but you're not understanding the full context of where that author fits into a certain category of other writers.

Katie Drummond: Lauren, I think that what you're trying to do right now is distract both us and our audience from your Scantron felony, when in fact, it seems like Mike is the most innocent party here. I just need to say.

Lauren Goode: Fair enough.

Michael Calore: At least I did the reading. All right, well we've all come clean. So thank you for all of that. And we can acknowledge that, of course, cheating is nothing new, but we're talking about it now because the use of AI tools like ChatGPT by students has exploded in recent years. It's become a topic of debate in both the tech and education spheres.
So just to get a sense of the scale of how much students are using AI, one estimate by the Digital Education Council says that around 86% of students, globally, regularly use AI. During the first two years that ChatGPT was publicly available, monthly visits to ChatGPT steadily grew and then started to dip in June when school gets out.

Katie Drummond: 86%.

Michael Calore: 86%. So yeah, I've used AI in my school.

Katie Drummond: That is an astonishing figure.

Michael Calore: So the appeal of something like ChatGPT, if you've used it, you understand why it would be useful to students. The appeal of using it is pretty obvious. It can write, it can research, it can summarize, it can generate working code, but the central question remains: Is using ChatGPT in schoolwork cheating? Where do we draw the line here?

Katie Drummond: So I don't think that there's a black and white answer, which is good for the length of this episode, but I think that that informs my overall view about AI and education, which is that this technology is here, you can't hide it, you can't make it go away. You can't prevent teenagers and young adults from accessing it. So you need to learn to live with it and evolve and set new rules and new guardrails. So in that context, I think there are a lot of uses of AI for students that I would not qualify as cheating. So getting back to the Cliff's Notes debacle, I think using AI to summarize information, like say you're coming up with notes to help you study and you use AI to summarize information for you and come up with a study guide for you, I think that's a fantastic use of AI and that would actually just save you a lot of time and allow you to focus on the studying part instead of the transcription and all of that stuff. Or honestly to me, using it to compile research for you that you'll use to then write a paper, I think use cases like that are a natural evolution of technology and what it can help us do.
I think for me, where AI becomes cheating is when you use AI to create a work product that was assigned and meant to come from you and now doesn't. But Lauren, I'm curious to hear what you think.

Lauren Goode: Well, it would make for a really good podcast if I vehemently disagreed with you right now. I think we're pretty aligned on this. Earlier this week I happened to be at the Google I/O conference, which is their annual software conference, and it's a huge AI fest. It's an AI love fest. And so I had the opportunity to talk to a bunch of different executives and many of these conversations were off the record. But after we got through the round of like, "Okay, what's the latest thing you announced?" I just said, "How are you feeling about AI and education? What's your framework for thinking about this?" And one of them said, "Are you using it to replace the goal of the exercise?" And it's a blurry line, but it's, I think, a line to draw in terms of whether or not you're "cheating." So if you're going to ask that question, you first have to determine the goal and then you have to determine what the product is. The product of an education is not actually test scores or assignments. The product is, are you learning something from doing it? So if you're using AI to generate an output, it's understandable that you would say, "Does this output demonstrate cheating?" But the cheating actually happens during the generative part of generative AI. And once again, that's very fuzzy, but I think that if the goal of an assignment is not just to turn this thing in on your teacher's desk on Tuesday morning, the goal of it is, did you learn something? And so if you're using AI to cheat through the learning part, which is, I think, what we're going to be discussing, then yes, I guess that is cheating. Broadly, the use of these tools in education, just broadly speaking, doesn't scream cheating to me.
Katie Drummond: I think that's a really interesting way of thinking about it actually. I like that a lot. Thank you, person at Google. Michael Calore: Yeah. If the assignment is to write 600 words about the French Revolution, then that's obviously something that ChatGPT can do for you pretty easily. But if the assignment is getting knowledge into your brain and then being able to relay it, to prove that you've memorized it and internalized it and understand it, then I think there's a lot of things that ChatGPT and tools like it can do for you. Like you mentioned, Katie, you can use it to summarize books, you can use it to help you with the research. One of the most ingenious uses that I've seen is people ask it to generate practice tests. They upload their whole textbook and they say, "I have a test on Friday on chapters four and six, can you generate five practice tests for me to take?" And then that helps them understand what sort of questions they would be getting and what kinds of things keep popping up in all of those practice tests; those things are probably the most important things to learn. So let me quickly share a real world example of AI cheating to see what you think about it. The most infamous case perhaps comes from a recent New York Magazine story about students using ChatGPT for their coursework. The story starts off with Chungin Roy Lee, a former Columbia student who created a generative AI app explicitly to cheat on his computer science schoolwork. He even ended up using it in job interviews with major tech companies. He scored an internship with Amazon after using his AI helper during the interview for the job. He declined to take that job, by the way. So that's pretty ingenious. He's coding an app. He's using generative AI to make an app to help him cheat on things and get jobs. Do you think that the "ingenuity" behind building something like this is cheating? Do we think that his creation of this AI tool carries any merit? 
Lauren Goode: I mean, it's so clearly cheating because the intent is to cheat. If we go back to that question of, are you using it to replace the goal of what you're trying to do? His goal is cheating. His goal is like, "Look how clever I am and then I'm cheating." Lee strikes me as the irritant in the room. What it's doing is bubbling to the surface a lightning rod topic that is much bigger than this one specific app. Katie Drummond: Well, something I thought was interesting, just in terms of he's the irritant: how many complicit irritants does he have on his team? In April of this year, Lee and a business partner raised $5.3 million to launch an app that scans your computer screen, listens to the audio and then gives AI generated feedback and answers to questions in real time. And my question when I read that was, "Who are these investors? Who are these people?" The website for this company says, "We want to cheat on everything." And someone was like, "Yes, I am writing a check." Of course it's cheating. They say that it's cheating. I mean, I appreciate the creativity. It's always interesting to see what people dream up with regards to AI and what they can create. But using AI to ace a job interview in real time, not to practice for the job interview beforehand, but to, in real time, answer the interviewer's questions, like you're setting yourself up and your career up for failure. If you get the job, you do need to have some degree of competence to actually perform the job effectively. And then I think something else that I'm sure we'll talk about throughout this show is the erosion of skill. It's knowing how to think on your feet or answer tough questions or engage with a stranger, make small talk. There are all of these life skills that I worry we're losing when we start to use tools like the tools that Lee has developed. 
And so of course I think there are interesting potential use cases for AI; interview prep or practice is an interesting way to use that technology. So again, it's not about the fact that AI exists and that it's being used in the context of education or a job interview, but it's about how we're using it. And certainly in this case it's about the intent. This is someone who is developing these tools specifically with the intention of using them and marketing them for cheating. And I don't like that. I don't like a cheater, other than when I cheated in high school. Michael Calore: Well, we've been talking a lot about ChatGPT so far and for good reason because it's the most popular of the generative AI tools that students are using, but there are other AI tools that they can use to help with their coursework or even just do their schoolwork for them. What are some of the other ones that are out there? Lauren Goode: I think you can literally take any of these AI products that we write about every day in WIRED, whether it's ChatGPT, whether it's Anthropic's Claude, whether it's Google Gemini or the Perplexity AI search engine, Gamma for putting together fancy decks. All of these tools. There are also sort of highly specialized AI tools like Wolfram or MathGPT, which are both math focused models. And you can see folks talking about that on Reddit. Katie Drummond: Something interesting to me too, is that there are now also tools that basically make AI detectors pretty useless. So there are tools that can make AI generated writing sound more human and more natural. So you basically would have ChatGPT write your paper, then run it through an additional platform to finesse the writing, which helps get that writing around any sort of AI detection software that your professor might be using. 
Some students have one LLM write a paper or an answer, and then they sort of run it through a few more to basically make sure that nothing can show up or nothing can be detected using AI detection software. Or students, I think too, are getting smarter about the prompts they use. So there was a great anecdote in this New York Magazine story about asking the LLM to make you sound like a college student who's kind of dumb, which is amazing. It's like maybe you don't need the A plus, maybe you're okay getting the C plus or the B minus. And so you set the expectations low, which reduces your risk, in theory, of getting caught cheating. Michael Calore: And you can train a chatbot to sound like you. Katie Drummond: Yes. Yeah. Michael Calore: To sound actually like you. One of the big innovations that's come up over the last year is a memory feature, especially if you have a paid subscription to a chatbot, you can upload all kinds of information to it in order to teach it about you. So you can give it papers, you can give it speeches, YouTube videos of you speaking so it understands the words that you'd like to use. It understands your voice as a human being. And then you can say, "Write this paper in my voice." And it will do that. It obviously won't be perfect, but it'll get a lot closer to sounding human. So I think we should also talk about some of the tools that are not necessarily straight chatbot tools that are AI tools. One of them is called Studdy, which is study with two Ds, which I'm sure the irony is not lost on any of us that they misspelled study in the name, but it's basically an AI tutor. You download the app and you take a picture of your homework and it acts like a tutor. It walks through the problem and helps you solve it, and it doesn't necessarily give you the answer, but it gives you all of the tools that you need in order to come up with the answer on your own. And it can give you very, very obvious hints as to what the answer could be. 
There's another tool out there called Chegg, C-H-E-G-G. Katie Drummond: These names are horrific, by the way. Just memo to Chegg and Studdy, you have some work to do. You both have some work to do. Lauren Goode: Chegg has been around for a while, right? Katie Drummond: It's a bad name. Lauren Goode: Yeah. Michael Calore: It has been, it's been very popular for a while. One of the reasons it's popular is the writing assistant. Basically you upload your paper and it checks it for grammar and syntax and it just helps you sound smarter. It also checks it for plagiarism, which is kind of amazing because if you're plagiarizing, it'll just help you not get caught plagiarizing and it can help you cite research. If you need to have a certain number of citations in a paper, oftentimes professors will say, "I want to see five sources cited." You just plug in URLs and it just generates citations for you. So it really makes that easy. Katie Drummond: I mean, I will say there are some parts of what you just described that I love. I love the idea of every student, no matter what school they go to, where in the country they live, what their socioeconomic circumstances are, that they would have access to one-on-one tutoring to help support them as they're doing their homework, wherever they're doing it, whatever kind of parental support they do or don't have. I think that that's incredible. I think the idea of making citations less of a pain in the ass is like, yeah, that sounds good. Not such a huge fan of helping you plagiarize, right? But it's again, it's like this dynamic with AI in education where not all good, not all bad. I've talked to educators and the impression I have gotten, and again, this is just anecdotal, but there is so much fear and resistance and reluctance and this feeling among faculty of being so overwhelmed by, "We have this massive problem, what are we going to do about it?" 
And I just think that too often people get caught up in the massive problem part of it and aren't thinking enough about the opportunities. Michael Calore: Of course, it's not just students who are using AI tools in the classroom, teachers are doing it too. In an article for The Free Press, an economics professor at George Mason University says that he uses the latest version of ChatGPT to give feedback on his PhD students' papers. So kudos to him. Also, The New York Times recently reported that in a national survey of more than 1,800 higher education instructors last year, 18% of them described themselves as frequent users of generative AI tools. This year, that percentage has nearly doubled. How do we feel about professors using generative AI chatbots to grade their PhD students' papers? Lauren Goode: So I have what may be a controversial opinion on this one, which is just give teachers all the tools. Broadly speaking, I don't think it is wrong for teachers to use the tools at their disposal, provided it aligns with what their school system or university policies say, if it is going to make their lives easier and help them to teach better. So there was another story in The New York Times written by Kashmir Hill that was about a woman at Northeastern University who caught her professor using ChatGPT to prepare lecture notes because of some string of a prompt that he accidentally left in the output for the lecture notes. And she basically wanted her $8,000 back for that semester because she was thinking, "I'm paying so much money to go here and my teacher is using ChatGPT." It currently costs $65,000 per year to go to Northeastern University in Boston. That's higher than the average for ranked private colleges in the US, but it's all still very expensive. 
So for that price, you're just hoping that your professors will saw off the top of your head and dump all the knowledge in that you need, and then you'll enter the workforce and nab that six-figure job right out of the gate. But that's not how that works, and that is not your professor's fault. At the same time, we ask so much of teachers. At the university level, most are underpaid. It is increasingly difficult to get a tenure-track position. Below the university level, teachers are far outnumbered by students. They're dealing with burnout from the pandemic. They were dealing with burnout before then, and funding for public schools has been on the decline at the state level for years because fewer people are choosing to send their kids to public schools. Katie Drummond: I mean, I totally agree with you in terms of one group of people in this scenario are subject matter experts, and one group of people in this scenario are not. They are learning a subject. They are learning how to behave and how to succeed in the world. So I think it's a mistake to conflate or compare students using AI with teachers using AI. I think that what a lot of students, particularly at a university level, are looking for from a professor is that human-to-human interaction, human feedback, human contact. They want to have a back-and-forth dialogue with their educator when they're at that academic level. And so if I wrote a paper and my professor used AI to read the paper and then grade the paper, I would obviously be very upset to know that. That feels like cheating at your job as a professor. And I think it cheats the student out of that human-to-human interaction. Ostensibly, they are paying for access to these professors; they're not paying for access to an LLM. Lauren Goode: Lesson plan, yeah. 
Katie Drummond: But for me, when I think about AI as an efficiency tool for educators: should a professor use AI to translate a written syllabus into a deck that they can present to the classroom for students who are maybe better visual learners than they are written learners? Obviously. That's an amazing thing to be able to do. You could create podcast versions of your curriculum so that students who have that kind of aptitude can learn through their ears. You know what I mean? There are so many different things that professors can do to create more dynamic learning experiences for students, and also to save themselves a lot of time. And none of that offends me; all of that, actually, I think is a very positive and productive development for educators. Michael Calore: Yeah, I mean essentially what you're talking about is people using AI tools to do their jobs in a way that's more efficient. Katie Drummond: Right, which is sort of the whole promise of AI. In theory, in a best-case scenario, that's what we're hoping for. Lauren Goode: What it's supposed to be. Yeah. Katie Drummond: Yeah. Michael Calore: Honestly, some of these use cases that we're talking about that we agree are acceptable are much the same way that generative AI tools are being used in the corporate world. People are using AI tools to generate decks. They're using them to generate podcasts so that they can understand things that they need to do for their job. They're using them to write emails, take meeting notes, all kinds of things that are very similar to the way that professors are using it. I would like to ask one more question before we take a break, and I want to know if we can identify some of the factors or conditions that we think have contributed to this increasing reliance on AI tools by students and professors. They feel slightly different because the use cases are slightly different. 
Katie Drummond: I think that Lauren had a really good point about teachers being underpaid and overworked. So I think the desire for some support via technology and some efficiency in the context of educators, I think that that makes total sense as a factor. But when I think about this big picture, I don't really think that there is a specific factor or condition here other than just the evolution of technology. The sometimes slow, but often very fast march of technological progress. And students have always used new technology to learn differently, to accelerate their ability to do schoolwork and yes, to cheat. So now AI is out there in the world, it's been commercialized, it's readily available, and they're using it. Of course they are. So I will acknowledge though that AI is an exponential leap, I think, in terms of how disruptive it is for education compared to something like a graphing calculator or Google search. But I don't think there is necessarily some new and novel factor other than the fact that the technology exists and that these are students in this generation who were raised with smartphones and smart watches and readily accessible information in the palms of their hands. And so I think for them, AI just feels like a very natural next step. And I think that's part of the disconnect. Whereas for teachers in their thirties or forties or fifties or sixties, AI feels much less natural, and therefore the idea that their students are using this technology is a much more nefarious and overwhelming phenomenon. Michael Calore: That's a great point, and I think we can talk about that forward march of technology when we come back. But for now, let's take a break. Welcome back to Uncanny Valley. So let's take a step back for a second and talk about that slow march of technology and how various technologies have shaped the classroom in our lifetimes. So the calculator first made its appearance in the 1970s. Of course, critics were up in arms. 
They feared that students would no longer be able to do basic math without the assistance of a little computer on their desk. The same thing happened with the internet when it really flowered and came into being in the late 90s and early 2000s. So how is this emergence of generative AI similar to or different from the arrival of any of these other technologies? Lauren Goode: I think the calculator is a false equivalence. And let me tell you, there is nothing more fun than being at a tech conference where there's a bunch of Googler PhDs when you ask this question too. And they go, "But the calculator." Everyone's so excited about the calculator, which is great, an amazing piece of technology. But I think it's normal that when new technology comes out, our minds tend to reach for these previous examples that we now understand. It's the calculator, but a calculator is different. A standard calculator is deterministic. It gives you a true answer, one plus one equals two. The way that these AI models work is that they are not deterministic. They're probabilistic. The type of AI we're talking about is also generative or originative. It produces entirely new content. A calculator doesn't do that. So I think if you sort of broadly categorize them all as new tools that are changing the world, yes, absolutely tech is a tool, but generative AI, I think, is in a different category from this. I was in college in the early 2000s when people were starting to use Google, and you're sort of retrieving entirely new sets of information in a way that's different from using a calculator, but also different from using ChatGPT. And I think if you were to use that as the comparison, the question is: is skipping all of those processes through which you typically learn something the critical part? Does that make sense? 
And this is so interesting because when I was thinking about this question and listening to your answer, I was thinking about it more in that way of thinking about the calculator, thinking about the advent of the internet and search, comparing them to AI. Where my brain went was what skills were lost with the advent of these new technologies and which of those was real and serious and maybe which one wasn't. And so when I think about the calculator, to me that felt like a more salient example vis-a-vis AI. With the advent of the calculator, are we all dumber at doing math on paper because we can use calculators? Michael Calore: Yes. Katie Drummond: For sure. Lauren Goode: Totally, one hundred percent. Katie Drummond: For sure. You think I can multiply two or three numbers? Oh no, my friend, you are so wrong. I keep tabs on my weekly running mileage, and I will use a calculator to be like, seven plus eight plus 6.25 plus five. That's how I use my calculator. So has that skill atrophied as a result of this technology being available? 100%. When I think about search and the internet, I'm not saying there hasn't been atrophy of human skill there, but that to me felt more like a widening of the aperture in terms of our access to information. But it doesn't feel like this technological phenomenon where you are losing vital brain-based skills, the way a calculator feels. And to me, AI feels that way. It's almost like when something is programmed or programmable, that's also where I feel like you start to lose your edge. Now that we program phone numbers into our phones, we don't know any phone numbers by heart. I know my phone number, I know my husband's phone number. I don't know anyone else's phone number. Maybe Lauren, maybe you're right. It's this false equivalence where you can't draw any meaningful conclusion from any one new piece of technology. And AI again, I think is just exponentially on this different scale in terms of disruption. 
But are we all bad at math? Yes, we are. Michael Calore: Yeah. Lauren Goode: Well, I guess I wonder, and I do still maintain that it's kind of a false equivalence to the calculator, but there were some teachers, I'm sure we all had them, who would say, "Fine, use your calculator, bring it to class." Or, "We know you're using it at home for your homework at night, but you have to show your work." What's the version of show your work when ChatGPT is writing an entire essay for you? Michael Calore: There isn't one. Katie Drummond: Yeah, I mean, I think some professors have had students submit chat logs with their LLMs to show how they use the LLM to generate a work product, but that starts from the foundational premise that ChatGPT or AI is integrated into that classroom. I think if you're just using it to generate the paper and lying about it, you're not showing your work. But I think some professors who maybe are more at the leading edge of how we're using this technology have tried to introduce AI in a way that then allows them to keep tabs on how students are actually interacting with it. Lauren Goode: Mike, what do you think? Do you think it's like the calculator or Google or anything else you can think of? Michael Calore: Well, so I started college in 1992, and then while I was at college, the web browser came around and I graduated from college in 1996. So I saw the internet come into being while I was in the halls of academia. And I actually had professors who were lamenting the fact that when they were assigning us work, we were not going to the library and using the card catalog to look up the answers to the questions that we were being asked in the various texts that were available in the library. Because all of a sudden we basically had the library in a box in our dorm rooms and we could just do it there. I think that's fantastic. Katie Drummond: Yes. 
Michael Calore: I think having access at your fingertips to literally the knowledge of the world is an amazing thing. Of course, the professor who had that view also thought that the Beatles ruined rock and roll and loved debating us about it after class. But I do think that when we think about using ChatGPT and whether or not it's cheating, like yes, absolutely, it's cheating if you use it in the ways that we've defined, but it's not going anywhere. And when we talk about these things becoming more prevalent in schools, our immediate instinct is like, "Okay, well how do we stop it? How do we contain it? Maybe we should ban it." But it really is not going anywhere. So I feel like there may be a missed opportunity right now to actually have conversations about how we can make academia work better for students and faculty. How are we all sitting with this? Lauren Goode: I mean, banning it isn't going to work, right? Do we agree with that? Is the toothpaste out of the tube? Katie Drummond: Yes, I think- Lauren Goode: And you could be a school district and ban it and the kids are going to go, "Haha, Haha, Ha." Michael Calore: Yeah. Katie Drummond: I mean that's a ridiculous idea to even... Lauren Goode: Right. Katie Drummond: If you run a school district out there in the United States, don't even think about it. Lauren Goode: Right. And what's challenging about the AI detection tools that some people use, they're often wrong. So I think, I don't know, I think we all have to come to some kind of agreement around what cheating is and what the intent of an educational exercise is in order to define what this new era of cheating is. So a version of that conversation that has to happen for all these different levels of society to say, "What is acceptable here? What are we getting from this? What are we learning from this? Is this bettering my experience as a participant in society?" Katie Drummond: And I think ideally from there, it's sort of, "Okay, we have the guardrails. 
We all agree what cheating is in this context of AI." And then it's about how do we use this technology for good? How do we use it for the benefit of teachers and the benefit of students? What is the best way forward there? And there are some really interesting thinkers out there who are already talking about this and already doing this. So Chris Ostro is a professor at the University of Colorado at Boulder, and they recommend actually teaching incoming college students about AI literacy and AI ethics. So the idea being that when students come in for their first year of college, we need to actually teach them about how and where AI should be used and where it shouldn't. When you say it out loud, you're like, "That's a very reasonable and rational idea. Obviously we should be doing that." Because I think for some students too, they're not even aware of the fact that maybe this use of AI is cheating, but this use of AI is something that their professor thinks is above board and really productive. And then there are professors who are doing, I think, really interesting things with AI in the context of education in the classroom. So they'll have AI generate an essay or an argument, and then they will have groups of students evaluate that argument, basically deconstruct it and critique it. So that's interesting to me because I think that's working a lot of those same muscles. It's the critical thinking, the analysis, the communication skills, but it's doing it in a different way than asking students to go home and write a paper or go home and write a response to that argument. The idea being, "No, don't let them do it at home because if they go home, they'll cheat." It's an interesting evolution. I think, Lauren, the point that you've brought up repeatedly, which I think is totally right, is thinking about what the goal is here, and then, given that AI is now standard practice among students, how do we get to the goal in a new way? 
Michael Calore: Yeah, and we have to figure out what we're going to do as a society with this problem because the stakes are really, really high. We are facing a possible future where there's going to be millions of people graduating from high school and college who are possibly functionally illiterate because they never learned how to string three words together. Katie Drummond: And I have a second grader, so if we could figure this out in the next 10 years, that would be much appreciated. Lauren Goode: So she's not using generative AI at this point? Katie Drummond: Well, no, she's not. Certainly not. She gets a homework packet and she loves to come home and sit down. I mean, she's a real nerd. I love her, but she loves to come home and sit down and do her homework with her pencil. But my husband is a real AI booster. We were playing Scrabble a couple of months ago, adult Scrabble with her. She's seven, Scrabble is for ages eight and up, and she was really frustrated because we were kicking her ass, and so he let her use ChatGPT on his computer and she could actually take a photo of the Scrabble board and share her letters. Like, "These are the letters that I have, what words can I make?" And I was like, "That's cheating." And then honestly, as we kept playing, it was cool because she was discovering all of these words that she had never heard of before and so she was learning how to pronounce them. She was asking us what they meant. My thinking about it softened as I watched her using it. But no, it's not something that is part of her day to day. She loves doing her homework and I want her to love doing her homework until high school when she'll start cheating like her mother. Michael Calore: This is actually a really good segue into the last thing that I want to talk about before we take another break, which is the things that we can do in order to make these tools more useful in the classroom. 
So thought exercise, if you ran a major university or if you're in the Department of Education before you lose your job, what would you be doing over your summer break coming up in order to get the institutions under your stewardship ready for the fall semester? Katie Drummond: I love this question. I have a roadmap. I'm ready. I love this idea of AI ethics, so I would be scouring my network, I would be hiring a professor to teach that entry level AI ethics class, and then I would be asking each of my department heads, because every realm of education within a given college is very different. If you have someone who runs the math department, they need to think about AI very differently than whoever runs the English department. So I would be asking each of my department leads to write AI guidelines for their faculty and their teachers. You can tell I'm very excited about my roadmap. Michael Calore: Oh yes. Katie Drummond: I would then review all of those guidelines by department, sign off on them, and also make sure that they laddered up to a big picture, institutional point of view on AI. Because obviously it's important that everyone is marching to the beat of the same drum, that you don't have sort of wildly divergent points of view within one given institution. Lauren Goode: What do you think your high level policy on AI would be right now if you had to say? Katie Drummond: I think it would really be that so much of this is about communication between teachers and students, that teachers need to be very clear with students about what is and is not acceptable, what is cheating, what is not cheating, and then they need to design a curriculum that incorporates more, I would say, AI friendly assignments and work products into their education plan. Because again, what I keep coming back to is, you can't send a student home with an essay assignment anymore. Lauren Goode: No, you can't. Katie Drummond: You can't do that. So it comes down to, what are you to do instead? 
Lauren Goode: I like it. Katie Drummond: Thank you. What would you do? Lauren Goode: I would enroll at Drummond. Drummond, that actually sounds like a college. Where did you go to school? Drummond. Michael Calore: It does. Lauren Goode: Well, I was going to say something else, but Katie, now that you said you might be hiring an ethics professor, I think I'm going to apply for that job, and I have this idea for what I would do as an ethics professor teaching AI to students right now. On the first day of class, I would bring in a couple groups of students. Group A would have to write an essay there on the spot, and group B presumably were doing it, but actually they weren't. They were just stealing group A's work and repurposing it as their own. And I haven't quite figured out all the mechanics of this yet, but basically I would use it as an example of here's what it feels like when you use ChatGPT to generate an essay: you're stealing some unknown person's work, essentially cut up into bits and pieces, and repurposing it as your own. Katie Drummond: Very intense, Lauren. Lauren Goode: I would start off the classroom fighting with each other, basically. Katie Drummond: Seriously? Michael Calore: It's a good illustration. I would say that if I was running a university, I would create a disciplinary balance in the curriculum across all of the departments. You want to make sure that people have a good multi-disciplinary view of whatever it is that they're studying. So what I mean is that some percentage of your grade is based on an oral exam or a discussion group or a blue book essay, and some other percentage is based on research papers and tests and other kinds of traditional coursework. So, I think there has to be some part of your final grade that comes from things that you cannot use AI for. 
Learning how to communicate, how to work in teams, sitting in a circle and talking through diverse viewpoints in order to understand an issue or solve a problem from multiple different angles. This is how part of my college education worked, and in those courses where we did that, where one third of our grade was based on a discussion group, one class during the week was devoted to sitting around and talking. I learned so much in those classes, and not only about other people, but also about the material. The discussions that we had about the material were not places that my brain would've normally gone. So yeah, that's what I would do. I think that's the thing that we would be losing if we all just continued to type into chatbots all the time. There are brilliant minds out there that need to be unleashed, and the only way to unleash them is to not have them staring at a screen. Lauren Goode: Mike's solution is touch some grass. I'm here for it. Michael Calore: Sit in a circle, everybody. Okay, let's take one more break and then we'll come right back. Welcome back to Uncanny Valley. Thank you both for a great conversation about AI and school and cheating, and thank you for sharing your stories. Before we go, we have to do real quick recommendations. Lightning round. Lauren, what is your recommendation? Lauren Goode: Ooh. I recommended flowers last time, so... Katie Drummond: We are going from strength to strength here at Uncanny Valley. Lauren Goode: My recommendation for flowers has not changed, for what it's worth. Hood River, Oregon. That's my recommendation. Michael Calore: That's your recommendation. Did you go there recently? Lauren Goode: Yeah, I did. I went to Hood River recently and I had a blast. It's right on the Columbia River. It's a beautiful area. If you are a Twilight fan, it turns out that the first Twilight movie, much of it was filmed right where we were. We happened to watch Twilight during that time just for kicks. 
Forgot how bad that movie was, but every time the River Valley showed up on screen, we shouted, "Gorge." Because we were in the gorge. I loved Hood River. It was lovely. Michael Calore: That's pretty good. Katie? Katie Drummond: My recommendation is very specific and very strange. It is a 2003 film called What a Girl Wants, starring Amanda Bynes and Colin Firth. Michael Calore: Wow. Katie Drummond: I watched this movie in high school, where I was cheating on my math exams. Sorry. For some reason, just the memory of me cheating on my high school math exams makes me laugh, and then I rewatched it with my daughter this weekend, and it's so bad and so ludicrous and just so fabulous. Colin Firth is a babe. Amanda Bynes is amazing, and I wish her the best. And it's a very fun, stupid movie if you want to just disconnect your brain and learn about the story of a seventeen-year-old girl who goes to the United Kingdom to meet the father she never knew. Michael Calore: Wow. Lauren Goode: Wow. Katie Drummond: Thank you. It's really good. Lauren Goode: I can't decide if you're saying it's good or it's terrible. Katie Drummond: It's both. You know what I mean? Lauren Goode: It's some combination of both. Katie Drummond: It's so bad. She falls in love with a bad boy with a motorcycle, but a heart of gold who also happens to sing in the band that plays in UK Parliament, so he just happens to be around all the time. He has spiky hair. Remember 2003? All the guys had gel, spiky hair. Lauren Goode: Yes, I still remember that. Early 2000s movies, boy, did they not age well. Katie Drummond: This one though, aged like a fine wine. Michael Calore: That's great. Katie Drummond: It's excellent. Lauren Goode: It's great. Katie Drummond: Mike, what do you recommend? Lauren Goode: Yeah. Michael Calore: Can I go the exact opposite? Katie Drummond: Please, someone. Yeah. Michael Calore: I'm going to go literary. Katie Drummond: Okay. 
Michael Calore: And I'm going to recommend a novel that I read recently that just shook me to my core. It's by Elena Ferrante, and it is called The Days of Abandonment. It's a novel by the great pseudonymous novelist Elena Ferrante, written in Italian and translated into English and many other languages. And it is about a woman who wakes up one day and finds out that her husband is leaving her and she doesn't know why and she doesn't know where he's going or who he's going with, but he just disappears from her life and she goes through it. She accidentally locks herself in her apartment. She has two children that she is now all of a sudden trying to take care of, but somehow neglecting because she's- Katie Drummond: This is terrible. Michael Calore: But the way that it's written is really good. It is a really heavy book. It's rough, it's really rough subject-matter-wise, but the writing is just incredible, and it's not a long book, so you don't have to sit and suffer with her for a great deal of time. I won't spoil anything, but I will say that there is some resolution in it. It's not a straight trip down to hell. It is a, really, just lovely observation of how human beings process grief and how human beings deal with crises, and I really loved it. Katie Drummond: Wow. Michael Calore: I kind of want to read it again, even though it was difficult to get through the first time. Katie Drummond: Just a reminder to everyone, Mike was the one who didn't cheat in high school or college, which totally tracks from the beginning of the episode to the end. Michael Calore: Thank you for the reminder. Katie Drummond: Yeah. Michael Calore: All right, well, thank you for those recommendations. Those were great, and thank you all for listening to Uncanny Valley. If you liked what you heard today, make sure to follow our show and to rate it on your podcast app of choice. 
If you'd like to get in touch with us with any questions, comments, or show suggestions, write to us at uncannyvalley@ We're going to be taking a break next week, but we will be back the week after that. Today's show is produced by Adriana Tapia and Kiana Mogadam. Greg Obis mixed this episode. Jake Loomis was our New York studio engineer, Daniel Roman fact-checked this episode. Jordan Bell is our executive producer. Katie Drummond is WIRED's global editorial director, and Chris Bannon is the head of Global Audio.

Is Elon Musk Really Stepping Back From DOGE?
WIRED

17-05-2025


Elon Musk is apparently turning his attention away from Washington and back to Tesla. On this episode of Uncanny Valley , the hosts unpack what Musk's pivot means for the future of DOGE. Elon Musk arrives for a town hall meeting wearing a cheesehead hat at the KI Convention Center on March 30 in Green Bay, Wisconsin. Photo-Illustration: WIRED Staff; Photograph:All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. Elon Musk says he's stepping back from his role with the so-called Department of Government Efficiency to turn his attention to his businesses—most urgently to Tesla, which has faced global sales slumps in recent months. In this episode, we discuss how our understanding of DOGE has evolved over the past five months and what we think will happen when Musk scales back. You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Katie Drummond on Bluesky at @katie-drummond. Write to us at uncannyvalley@ How to Listen You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too. Transcript Note: This is an automated transcript, which may contain errors. Michael Calore: Hey, this is Mike. Before we start, I want to take the chance to remind you that we want to hear from you. Do you have a tech related question that's been on your mind, or maybe you have a topic that you wish we talk about on the show? If so, you can write to us at uncannyvalley@ and if you listen to and enjoy our episodes, please rate it and leave your review on your podcast app of choice. It really helps other people find us. 
Hi folks, co-hosts. How's it going? Katie Drummond: Ugh. Michael Calore: That good? Katie Drummond: That was me, Katie. That was me speaking. No, it's going all right. It's been a stressful 90 minutes leading up to recording this podcast, but I'm okay. Michael Calore: Did you just fly through Newark? Katie Drummond: No, actually I didn't. Although I know that that is in your cards, in the near future. I actually rescheduled a flight to avoid Newark, so I'm now taking a red-eye for no reason other than I don't want to fly into Newark Airport. Lauren Goode: Smart. Katie Drummond: Thank you. Michael Calore: I'm jealous. Lauren Goode: Mike, I'm sending you all of the good wishes. Michael Calore: Thank you. I hope to listen to this podcast on an airplane that took off on time and lands on time without incident on Thursday. Lauren Goode: I hope you return next week able to tape another podcast because you didn't get stuck somewhere. Michael Calore: I think metaphysically, we're all stuck somewhere right now. Lauren Goode: Yeah, we're in the middle of some big transitions. That's probably the one thing that we have in common with Elon Musk. Katie Drummond: Touché. Michael Calore: Back in the first week of January, we put out an episode of this show that was all about DOGE, the so-called Department of Government Efficiency. I would say it was our very first DOGE episode, if I'm remembering correctly. And we talked about the key players, the goals of the group, and the ins and outs of government spending. A lot has happened since then. And now Elon Musk says that he's stepping back from his full-time role at DOGE. There are still many unanswered questions about where DOGE stands now, including if and when Elon's exit will happen, but we're wondering what actually has been accomplished during Musk's time with the DOGE Bros. So, today in the show, the latest on DOGE and what it may look like post-Elon. 
This is WIRED's Uncanny Valley , a show about the people, power, and influence of Silicon Valley. I'm Michael Calore, Director of Consumer Tech and Culture here at WIRED. Lauren Goode: I'm Lauren Goode, I'm a Senior Writer at WIRED. Katie Drummond: And I'm Katie Drummond, WIRED's Global Editorial Director. Michael Calore: So, I want to start by asking a question that we asked in our last deep dive on DOGE, because I think the answer may have changed since then. At this moment, just a few months into Trump's second term as President, May 2025, what exactly is DOGE? Lauren Goode: Well, I wish it was a figment of our imagination. Katie Drummond: Yes, I wish that it was a fever dream, but that is still the big question, incredibly enough. And I think at WIRED, we've actually been very careful when we characterize DOGE in our reporting, we often, or always, use the term, "so-called." The so-called Department of Government Efficiency, because it doesn't really actually exist. And as some WIRED reporters pointed out last month, I think it was Zoë and Kate, it's almost a metaphysical question at this point. And that was in relation to employees at the General Services Administration, despite the fact that there are at least half a dozen DOGE operatives on payroll at that administration, despite the fact that there is a section of that building that is for DOGE use only and is a secure facility within the GSA, that the acting head of the GSA actually said, in an all-hands, that there was no DOGE team working at the GSA. Which begs the question, well, who are these people then and who do they work for? I think in a more practical way, there are two DOGEs. There's US Digital Service, which was essentially hijacked and repurposed by the administration, now known as the US DOGE Service. Sure. And then there's a temporary organization within the US DOGE service, called, obviously, the US DOGE Service Temporary Organization. 
And that organization is ostensibly in charge of carrying out the DOGE agenda. So, I think all of this semantic BS aside, what is DOGE? Well, it is the brainchild of Elon Musk. It is something that the president got on board with very early, and DOGE is effectively a collection of typically young, I think almost always male, technologists who come from companies that Musk and Peter Thiel do run or have run. Despite what the acting head of GSA says, there is a DOGE, and it is made up of these dozens and dozens of technologists who are working inside all of these different agencies. That is what DOGE is, whether it's a real department or agency or not, that's what it is. And we have a pretty good sense now, in May, of what they're actually doing. Michael Calore: And it's important to note that they did make a number of hires, dozens and dozens of people who they hired to be a part of DOGE, who are now installed in various agencies around the federal government. Lauren Goode: And a lot more layoffs too. Michael Calore: Yeah. Well, we have been doing a lot of reporting on DOGE. As Katie, as you just mentioned, WIRED has been on top of the story ever since the beginning, because we know Elon and we know his playbook. So, what are some of the stories that WIRED has done over the last few months on DOGE that have just totally blown your mind? Katie Drummond: Wow. There are a lot. I think the reporting that we have done around what DOGE is doing using AI and using all of the data that they've been able to access to actually surveil immigrants, I think that that reporting is incredibly disturbing. I think it is beyond the worst fears of folks in late January, early February as DOGE's work was getting underway, the idea that this kind of thing could happen and that it could happen so quickly, it certainly was talked about. It was speculated in terms of what do you think they're going to do? What are they after? There were a lot of hypotheses at the time. 
I don't think anyone anticipated that we would see that kind of work happen so quickly and in such a dystopian way. And then, I think, it hasn't blown my mind, but I really like the coverage that we've done around how recruiting for DOGE happens. And we just published another story on this recently, I think it was a couple of weeks ago. It was in early May, from Caroline Haskins and Tori Elliot, that was about another round of recruiting that's happening for DOGE. And this recruiting always seems to happen in these Slack groups for alumni of various tech companies, this time it was Palantir, and this guy, this entrepreneur, went into the Slack room and basically said, "Hey, I'm looking for people who would be excited to design and deploy AI agents who could free up at least 70,000 full-time government workers over the next year." And in the way he phrased it, he was saying, "These agents could free up these 70,000 people for," quote, "higher impact work." Which begs the question, higher impact work in the private sector after you fire all of them? Exactly what is the plan? And that story was really interesting to me because of how, first of all, I think how the recruiting happens is really interesting. I think the fact that it's happening, they're specifically targeting alums from certain companies, that this is happening in Slack groups and message boards. I think that's interesting. But I thought that the way that message was received was fascinating, given that we're now in May. And so, people have seen DOGE play out over the last few months. We wrote, "Eight people reacted with clown face emojis, three reacted with a custom emoji of a man licking a boot. Two reacted with a custom emoji of Joaquin Phoenix giving a thumbs down in the movie Gladiator. 
And three reacted with a custom emoji with the word 'fascist.'" So, it was just interesting to me to note that alums of a company like Palantir are looking at that message, and at least some of them are saying, like, "Nah, I see what you're doing here. And this is not only not compelling to me as a recruitment effort, but actually fascist." Lauren Goode: Now, I should mention that I happen to have been on a short book leave at the start of this year— Katie Drummond: Good timing. Lauren Goode: When ... Great timing. Katie knows I came back, and I was lamenting to her via our Slack, like, "Katie, I'm literally never taking leave again because so much happened." And starting in late January, I started to see WIRED's incredible reporting, watching it from afar and seeing all this news come out about DOGE, and just was like, "What is happening?" And one of the things that stood out to me almost immediately was this juxtaposition of cuts to the federal workforce and also cuts to federal spending, like the $1 limit that was placed on federal employees credit cards— Michael Calore: Oh, gosh. Lauren Goode: And how much this limited their ability to do their job, like running out of toilet paper, running out of printer paper, not being able to just do office functions as a federal employee, juxtaposed with Trump's incredibly lavish candlelight dinners and the crypto scheme we talked about last week, and all of the ways in which it seems like there are members of this administration who are simply lining their pockets as they have dispatched DOGE to make all of these cuts. If you just step back from that, it's hard to see, at this point, how this benefits America. What has actually happened here? Michael Calore: I think probably my favorite story is one of our most recent ones about the Library of Congress, and how two gentlemen showed up to the Library of Congress and said, "Hi, we work here. You need to let us in." Capitol Police said, "No. Who are you? 
Can you identify yourselves?" And they showed him a note from DOGE saying that they worked there and that they should let them in. And the Capitol Police turned them away. And it turns out they did actually work there. They had a note from Daddy. Lauren Goode: Please never call him that again. Katie Drummond: Oh, boy. Michael Calore: So, back when we first started talking about DOGE, at the beginning of the year, it was actually two people. It was Elon Musk and Vivek Ramaswamy. I think a week after we published that episode, Vivek was out. Lauren Goode: Has anyone heard from Vivek? Katie Drummond: I don't think about him. I don't know him. I don't know that man. No. Isn't he running for governor? Lauren Goode: I was going to say he's running for governor of Ohio. Wasn't that the plan? I like how we're all Googling this. Katie Drummond: He's pivoted. Michael Calore: Well, it's important to think about who's running it now, because Elon says he's only going to be around one to two days a week. He says he will continue to do work for DOGE and for President Trump until the end of Trump's term, whatever year that may be. He's going to be scaling back. He's going to go on 20% time, basically. So, who are the people who are still there? Who are the people? Who are the names that we now need to know? Lauren Goode: I think AI agents are going to be running all of it. Katie Drummond: Well, obviously they're apparently replacing 70,000 federal workers with them within the year. Obviously, there are some very high-profile members of DOGE after just a few short months. There's Edward "Big Balls" Coristine, this 19-year-old appointed by Musk who owns LLC. I'm sure everyone is familiar with Big Balls at this point. There are plenty of other young inexperienced engineers working across these agencies, and then there are the adults in the room. 
There are people like Steve Davis, who is one of Musk's, really, right-hand men who works closely alongside him at a number of his companies, and has been working with him in the federal government. And we also, of course, know that they are still actively recruiting, again, largely from companies that Musk himself owns. So, I think that the whole point of all of this is that, yes, Elon Musk is scaling back. So, let's say he scales back, let's say he decides to part ways with DOGE and the administration altogether. DOGE is already embedded in the federal government. He accomplished what he set out to do, insofar as we now have DOGE team members, DOGE operatives at dozens and dozens and dozens of federal agencies. They very clearly have their marching orders, they're carrying out work. So, at this point, you can't claw that all back, and that doesn't leave the federal government just because Elon Musk potentially leaves the government. The damage is done. I do think it's important to note here, and I know this will come up over and over because I'm going to keep bringing it up: Elon Musk at two days a week is a lot of Elon Musk. 20% of Elon Musk's time going to the federal government, sure, he won't be in the weeds seven days a week, 24 hours a day, but that's a lot of Musk time. So, I do think it's important to be cautious, and I just say this to all of our listeners and to everyone out there, this idea that Musk is disappearing from the federal government or disappearing from DOGE, the administration might want you to think that that's what's happening. I suspect that that is not at all what's happening. That said, from all appearances, Elon Musk might be less involved in DOGE, but DOGE is going to keep on keeping on. Michael Calore: And while it's trucking, what is Elon going to be doing? What does he say? Lauren Goode: Yeah, what is he going to be doing? 
Katie, do you have a sense of how much of this is related to the fact that Tesla isn't doing so well right now? Katie Drummond: Well, I suspect that that's a big factor, but I think so much of the narrative externally, and even people at Condé Nast who have come up to me to be like, "Elon, he's out. Is it Tesla? Why is he leaving DOGE?" This is optics. This is narrative. His company is in the tubes, it is really struggling. They needed a way to change that story, and they needed a way to change that story very quickly. The best way that they could change that story was to say, "No, no, no, no, no. Don't worry. Elon Musk is not all in on DOGE and the federal government. He is going to be stepping back and he's going to be focusing on his other companies." Even just Trump saying that, Musk saying that, that being the narrative that plays out in the media is incredibly helpful for Musk, particularly in the context of Tesla, and just the board, and shareholders, and their confidence in his ability to bring this company back from the brink. So, do I think that he's pulling back and will be spending less time with DOGE? Yes. Do I think a lot of this was just smoke, and mirrors, and optics, and narrative and PR? Yes, it was incredibly well-timed right as Tesla was really, really, really in the tubes and getting a ton of bad press. Elon Musk makes this very convenient announcement, right? Lauren Goode: Mm-hmm. Right. And this is something that the venture capitalist and Musk's fellow South African, David Sacks, has said, "It's just what Musk does." He said he has these intense bursts where he focuses on something, gets the right people and the structure in place, feels like he understands something, and then he can delegate. And he's just reached that point with DOGE. He's in delegation mode. Katie Drummond: Yes, it seems like he has all the right people in place, and a structure that is so clear and transparent to the American people, that it's time for him to move on. 
Michael Calore: And I do think that he is going to have to figure out the Tesla situation. As you said, the company's really struggling, and there are a lot of reasons for that. There are no new Tesla models for people to buy, even though they were promised. There have been a bunch of recalls. People are just hesitant about buying a new EV right now anyway, for a number of reasons. But it's really, it's him that people don't like. So much like the damage that he has done to the structure of the federal government with DOGE, similarly, he has done damage to Tesla, the brand, by his association with the policies of the Trump Administration, and his cozying up to the President, and his firing, and destroying the rights of people. Katie Drummond: And isn't it also true that all of these problems with Tesla, all of the problems, aside from Elon Musk himself, those problems were happening or were poised to happen regardless, like issues with new models, with recalls, that all predates his work with DOGE, unless I'm drastically misunderstanding how time works. So, those problems with the company existed and were bound to become a bigger deal at some point, and then it really feels like his work with DOGE and the federal government just added fuel to the fire. He just poured gasoline on all of his company's problems by participating with the Trump Administration in the way that he did. But the fact that Tesla is a troubled company is old news, and has nothing to do with the fact that Elon Musk is not a well-liked individual. So, it's just problem on top of problem. Michael Calore: That's right. That's right. And the damage is done, I think, at this point. He would probably have to move on from that company in order to fully turn it around. Katie Drummond: Well, we still have a lot of time left in the year, so we'll see. Michael Calore: All right, well let's take a break and we'll come right back. Welcome back to Uncanny Valley . 
When we talked about DOGE at the beginning of the year, it still felt just like an idea. The tone was decidedly different. We talked about how the group was named after a meme coin, and we all had a good laugh at the absurdity of it all. It was still unclear what would happen. And of course, since then, DOGE has gutted multiple federal agencies, dismantled so many programs, fired a bunch of people, built a giant database to track and surveil people, among other things. Katie Drummond: So, I wasn't actually with you guys on the show when you talked about DOGE in January, but I was listening to the show, and I remember you talking about Musk's plans to, quote, "open up the books and crunch the numbers to cut costs." Sounds very exciting. And cutting some of those costs, of course, had to do with laying people off. Now, I remember that because Zoë Schiffer, who hosts the other episode of Uncanny Valley, said she would be surprised if any, quote, "books were even opened." So, what did we see actually happen from that prediction to now, from January to May? Lauren Goode: I want to give Zoë a shout-out here because I think the context of that was me saying, "Oh, I wonder how they're going to go about this careful, methodical process of doing the thing." And so she was like, "This is going to be utter chaos. They're not going to open any books." Katie Drummond: She was right. It has been chaos. Lauren Goode: So we also said that the New Yorker reported Vivek had joked at one point that he was going to do a numbers game. You would lose your job if you had the wrong Social Security number. That didn't actually happen, but Zoë surmised at the time that this was potentially going to be run off of the Twitter/X playbook, run like a chaotic startup. And that's true. I definitely did think there would be more of a process to what DOGE was doing, so I was wrong. There was process. 
They have systematically terminated leases for federal office buildings, or taken over other buildings. They're reportedly building out this big master database. They've gutted public agencies like the CDC, and regulatory bodies like the CFPB, the Consumer Financial Protection Bureau. So they've done a lot. I think the part where I thought there would be more process was around the people, the human capital of all this, like the federal workforce. And so, maybe in a lot of ways, this is just like some startup, you're acting recklessly and worrying about the human beings you're affecting later. Michael Calore: And I think the thing that we also predicted correctly was that if DOGE has a chance to shape the regulatory agencies in the federal government, they would shape those agencies in a way that benefit people who are in their industry. Lauren Goode: Right. Katie Drummond: I think one of the questions you guys were asking back in January was whether or not the administration was bringing in these guys. It was Musk and Ramaswamy at the time, because they actually wanted them to advise on how technology is used as part of government services, as part of the way the government works, or because they thought the two would be influential over the types of regulations that are rolled back or introduced. So, man, it's crazy to even say all of that, knowing what we know now about ... It's just interesting, in January, we knew so little, we were so naive. But what do you think now about why Musk, in particular, was actually brought on board? Lauren Goode: Well, honestly, I think that they have done both. WIRED has reported that DOGE is building out a master database of sensitive information about private citizens, and a database that will reportedly help them track immigrants. And we know they're playing around with these AI agents, like you just talked about, Katie. 
And so, we know that they were brought in to apply that technology building mindset to government services, if you want to call it that. But I think that they also are influencing policy, because on the policy side, we've seen, I mentioned David Sacks, he's Trump's crypto and AI Czar, and he's been weighing in on cryptocurrency and stablecoin regulations. Even if that hasn't been pushed through yet, he's certainly in Trump's ear about it. Musk has also been pushing back on Trump's tariff policies. Musk has been expressing his opinion on immigration policies. Those are just a few examples, but safe to say, he has Trump's ear. Michael Calore: I think at the beginning I was cautiously interested in the IT consultant part of it, like the DOGE mission to come in and modernize the federal government. Obviously, if you've ever dealt with federal government agencies, as a person who's computer-literate, sometimes you are just completely flabbergasted by the tools that you have to use to get access to services in this country. So yes, guys, come in, do your thing, zhuzh it up, make it work better. Of course, that is absolutely not what happened. But I was excited about the prospect of that maybe happening. And it turns out that they really took the opportunity to take all of the data that are in all of these agencies and put it all together into one giant input, fed into various systems that are going to process that data and find efficiencies in ways that are probably going to affect human beings negatively. A computer is really good at doing very simple tasks over and over again. It doesn't necessarily understand the nuances of how things are divided up equitably among different sectors of society, it doesn't understand the nuances of people's personal situations. So, that's the modernization that we're going to see, I think, of government systems. And that's frightening, that wasn't what I was expecting. 
Katie Drummond: Now, we've talked a little bit on and off in this episode already about AI. AI has played a much bigger role with DOGE than maybe we thought it would, maybe we hoped it would, in January. So, let's talk about that. As far as we know now, what does DOGE aspire to do with AI, and how were you thinking about that in January, if you were thinking about it at all? Lauren Goode: I still feel like I don't really understand what they're trying to do with AI, frankly. Katie Drummond: Maybe they don't. Lauren Goode: We know at this point that there are AI officers and leaders in the federal government. We mentioned David Sacks before, who was put in charge of crypto and AI. There is now the first ever AI officer at the FDA, Jeremy Walsh. WIRED has reported that OpenAI and the FDA are collaborating for an AI assisted scientific review of products. Our colleague, Brian Barrett, has written about the use of AI agents. In particular, Brian wrote, "It's like asking a toddler to operate heavy machinery." Social Security Administration has been asked to incorporate an AI chatbot into their jobs. And we've also reported on how the GSA, the General Services Administration has launched something called the GSAI bot. But we also later found out that that's something that was based on an existing code base, a project that existed prior to DOGE taking over the building. I think the short answer is that when DOGE first started, we didn't really have a clear sense of how they were going to use AI. And even right now, after saying all that on this podcast, I cannot pretend to understand fully what they are doing with AI. And that's either due to a lack of transparency, or just the fact that it all seems very disparate, very scattered. I'm not going to sit here on this podcast and pretend to make sense of it. Michael Calore: With a lot of this stuff, it's hard to understand where the DOGE initiatives end, and where just other initiatives in the federal government begin. 
I think simply because there's a lack of transparency about how these decisions are being made, who's advising whom, and who's really drafting the memos. When we think about what AI is going to do, we have to consider what an AI agent is. It is a program that can do the same work as a human being. And that's just the broad definition of it. So, you can deploy an AI agent to write emails, make phone calls, fill out paperwork, whatever it is. You're just basically doing admin work, and there are a lot of admins in the federal government, and I think that that is in our future. People have this cozy idea that their experience with AI is maybe ChatGPT or Siri, or something like that. So, "Oh, you have a problem with your taxes, you can just talk to the IRS chatbot and it'll solve it for you." That sounds like a nightmare. I can't imagine that any IRS chatbot is going to be able to solve any problems for me. It'll probably just make me mad and make the problems worse or the same. But when you think about, "Okay, here is an opportunity for us to use these AI agents in a way that will increase efficiency across the government," what you're really talking about is just that we don't need these people anymore and we just need to replace them with the technology. Katie Drummond: One of the pieces of this that I think is so consequential, I remember maybe a year and a half ago, talking to a bunch of civil servants, people in decision-making roles across federal agencies, and they were all asking a lot of questions about AI. They were very curious about AI. The Biden administration's executive order had put forth all of these different demands of different agencies to investigate the potential for AI to do X, Y, or Z within their agencies. So they were in that exploratory process.
They were very slow to think about how AI could be useful within those agencies, and that's for bureaucratic reasons, but it's also because the work of these federal agencies, you don't really want to get it wrong. When we're talking about the IRS, or we're talking about payments from Treasury, or we're talking about evaluating new drugs via the FDA, you want to be right. You want to reduce the risk of error as much as possible. And I think for so many people in technology, there's this notion that technology outdoes human performance just inevitably. It's inevitable that a system will do a better job than a human being who is fallible, who makes mistakes. That said, what we know about AI so far, generative AI in particular, is that it makes a lot of mistakes. This is very imperfect technology. AI agents are not even really ready for primetime within a private company for one individual to use in their own home, let alone inside the federal bureaucracy. So, I do think that a lot of what DOGE has done with AI, like, Lauren, to your point about them building on top of this existing AI initiative at the GSA, is they're taking very preliminary work in AI at these agencies, and they're just fast-tracking it. They're saying, "This is going to take three years. No, no, we're doing this in three weeks." And that's scary, given what we know about AI and how effective and how reliable it is right now. So, does anything stand out to you guys about that in the context of what we're talking about around AI and DOGE, and AI in the federal government? What are some of the risks that really stand out to you guys? Lauren Goode: I think that it is consequential when you think about AI being used in such a way that it ends up impacting people's jobs, right? Katie Drummond: Right. Lauren Goode: But I actually think that that idea of AI agents doing the jobs of humans at this point is a little bit optimistic.
And when I think about what feels more consequential, it's this idea of AI just becoming a code word or a buzzword for what is essentially very, very, very advanced search. So, if they are able to build this master database that creates some sort of profile of every US citizen, or every US non-citizen, and is pulling in from all these different data sources, both within government agencies, but public documents, and across the web and across social media, and anything you've ever tweeted, and anything you've ever said, and anything you've ever done, and if you've ever gotten a parking ticket or a DUI, or you've committed a crime, or anything like that, to just hoover that all into one centralized location and be able to pull that up on a citizen at the drop of a hat, that, to me, feels more consequential and potentially more dangerous than going to the Social Security website and having an annoying bot trying to answer your questions for you. Michael Calore: It's surveillance creep, really, is what it is. And marry that with computer vision, like face recognition and the ability to photograph everybody who's in a car at the border, cross-reference that with government documentation like passports and driver's licenses, and you have a whole new level of surveillance that we have not dealt with before in our society. Katie Drummond: Now, not to be all negative Nelly, because we often are, but does any ... What? Michael Calore: What show are you on? Katie Drummond: You know me, the Canadian. Does anything stand out to both of you as having actually been good from all of this? So, DOGE takeover January to May, anything potentially exciting? Any bright spots, anything where we should be a little bit more generous in our assessment and say, "You know what, actually, as dystopian and scary as a lot of this is, this is potentially a good thing, or this is unequivocally a good thing"? Anything like that that stands out to either of you?
Lauren Goode: I would say that if there's one area where we could be a little bit more generous, it might be that if this turnaround of the federal government was something that was being done in good faith, then I might give them a pass after just five months. I might say ... Katie, you've done turnarounds before? Katie Drummond: I have. Lauren Goode: They take longer than five months, right? Katie Drummond: They do. Lauren Goode: Yes. Okay. Katie Drummond: Depends on the size of the organization. With the federal government, you're looking at five to 10 years. Lauren Goode: Right. Exactly. So there's that. In terms of the actual cuts to fraud and abuse as promised, as far as we know and has been reported by other outlets, the actual cuts that DOGE has made fall far below what Trump and Musk had promised. Initially, they said that they were going to slash $2 trillion from the federal budget. That goal was cut in half almost immediately. The latest claims are that $160 billion has been saved through firing federal workers, canceling contracts, selling off the buildings, other things. And NPR just reported that the tracker on DOGE's own website is rife with errors and inaccuracies, though. The wall of receipts that DOGE has been posting totals just $63 billion in reductions, and actually, as of late March, government spending was up 10% from a year earlier. Revenue was still low. So, we're still in a deficit, in terms of federal spending. There is one thing I've heard from folks in Silicon Valley they think is a good thing. It's Musk's pushback on some of Trump's immigration policies, specifically those that affect high-tech workers. During Trump 1.0, the denial rates for H-1B visas spiked, and Trump said he wanted to end, forever, the use of H-1B visas, he called it a cheap labor program. Now, he has flip-flopped a bit. Stephen Miller, his Deputy Chief of Staff and Homeland Security Advisor, has been pushing for more restrictions on this worker visa.
But Musk, who actually understands how critical this visa is for the talent pipeline in Silicon Valley, maybe because he's an immigrant, I think has managed to sway Trump a bit on that. And so, for obvious reasons, perhaps people in Silicon Valley say, "Well, I think this is actually a good thing that Musk is doing." Michael Calore: I'll point out two things. Lauren Goode: Go ahead. Michael Calore: One, the LOLs. The press conference that they did in the Oval Office where Elon brought his child— Katie Drummond: Oh, that was good. Michael Calore: That was definitely a big highlight for me. But seriously, the other thing is that people are really engaged now. You talk to people who are somewhat politically minded, and they have opinions about government spending, they have opinions about oversight and transparency, they have opinions about what actually matters to them. Like what do they need from their government, what do they want their government to do for them. Those were all nebulous concepts even five, six months ago that I think are at the top of everybody's mind now. And I think that is a good thing. Katie Drummond: Oh, I love that. A galvanized and engaged public— Michael Calore: That's right. Katie Drummond: As a plus side to DOGE. I love it. We're going to take a quick break and we'll be right back. Michael Calore: Welcome back to Uncanny Valley. Before we wrap up, let's give the people something to think about, our recommendations. Katie, why don't you go first? Katie Drummond: I have an extremely specific recommendation. Do either of you use TikTok? Lauren Goode: I do sometimes. Michael Calore: Define use. Katie Drummond: Scroll. Lauren Goode: Yeah, scroll maybe like once every couple weeks. Katie Drummond: Do you thumb through TikTok? Michael Calore: I'm familiar with it, yes. Katie Drummond: There is an account on TikTok called Amalfi Private Jets. It is the account of a private jet company.
This is the most genius marketing I have ever seen in my life. For someone who likes reality TV and trash, which is me. It's these little 60-second reality TV episodes, where the CEO of Amalfi Private Jets is on the phone or he's on a Zoom with one of his clients, often, I think her name is McKenna. She's a young, extremely wealthy, entitled little brat, and she'll call him up in the clip, he's at his office. He's young and handsome, and he's like, "Hey, McKenna." And she's like, "Hey, Colin. So, my dad said that I had to fly from Geneva to London," and blah, blah, blah. And then there's this whole dramatic narrative around McKenna and why she needs a $75,000 jet immediately, and she needs it to have vegan spinach wraps refrigerated. It's just these very dramatic little vignettes of what life is like for the rich and fabulous who are calling Amalfi Private Jets to book their private jets. So there's that account. And then, once you go down the rabbit hole of that account, the TikTok algorithm will start serving up these companion accounts they've created, like the CEO of the company has one, his girlfriend has one. I think McKenna now has one. And so, there's this little cinematic universe of Amalfi Private Jets on TikTok, and you get sucked in, and you get to know all of these people. And it's a little vertical video reality show experience that I highly recommend if you only have 60 seconds, which then turn into two hours, which then turn into pulling an all-nighter to learn everything about Amalfi Private Jets, their CEO, his girlfriend, and their wealthy clientele. This is the TikTok for you. Enjoy. Michael Calore: This is genius. Katie Drummond: Thank you. Lauren Goode: This is the reality TV of the future. Katie Drummond: It's incredible. Lauren Goode: It has arrived. Katie Drummond: And you know what? And I just did their job for them, because it's marketing for their company. They got me. Michael Calore: All right, Lauren, what's your recommendation? 
Lauren Goode: My recommendation might go nicely on your Amalfi Private Jet. Hear me out, peonies. You guys like flowers? Michael Calore: Oh, peonies. Lauren Goode: Peonies. Katie Drummond: I like flowers. Michael Calore: Sure. Lauren Goode: Do you like peonies? Katie Drummond: I couldn't tell one from another, but I like them. Lauren Goode: They're beautiful. It's peony season here. I'm saying that now with the O enunciated, which is how I would do it if I were giving my Architectural Digest home tour. Michael Calore: I see. Lauren Goode: Yes, these are peonies. Katie Drummond: Oh, I'm just looking at Google images of them. They're very nice. Lauren Goode: Aren't they beautiful? Katie Drummond: They're very nice. Lauren Goode: The cool thing is they do have a very short-lived season. In this part of the world, it's typically late May through June. If you plant them, they only bloom for a short period of time. If you buy them, they're these closed balls, not to be confused with Edward Coristine, aka "Big Balls." They're these closed balls, and then after a few days they open up and they're the most magnificent looking things. They're really, really pretty. And I got some last week at the flower shop, and when they opened, I was like, "Oh my God." It just made me so happy. And they're bright pink. And so, if you're just looking to do something nice for yourself, or someone you just want to pick up a nice little thoughtful gift for someone, get them some peonies. You know what? I didn't check to see if they're toxic to pets. So, check that first, folks. But, yes. Michael Calore: That's great. Katie Drummond: Mike, what's yours? Michael Calore: So, I'm going to recommend an app. If you follow me on Instagram, Snackfight on Instagram, you may notice that I have not posted in a long time, and that's because I stopped posting on Instagram, and I basically just use it as a direct message platform now.
But there are still parts of my brain that enjoy sharing photos with my friends, so I found another app to go share photos on, and it's called Retro. Lauren Goode: Yeah, Retro. Michael Calore: So, it's been around for a while, but I went casting about for other things out there, and I found that there was a group of my friends who are on Retro, and I was like, "Oh, this is great." It's very private. By default, somebody can only see back a couple of weeks. But if you would like to, you can give the other user a key, which unlocks your full profile so that they can look at all of your photos going back to the beginning of time, according to whenever you started posting on Retro. I really like that about it, the fact that when I post a photo, I know exactly who's going to see it. There are no Reels, there's no ads, there's no messaging features, there's no weird soft-core porno on there, there's no memes. It's just pictures. And I really like that. It's like riding a bicycle through the countryside after driving a car through a city. It's like a real different way to experience photo sharing, because it's exactly like the original way of experiencing photo sharing, and I'd forgotten what that feels like. Katie Drummond: Oh, it sounds lovely. Lauren Goode: What's cool about the app too is when you open it and you haven't filled out that week's photos, when you tap on it, it automatically identifies those photos from that week in your camera roll. It's like, "You shot these photos between Sunday and Saturday, and here's where you can fill this week in." Michael Calore: And— Lauren Goode: It's pretty cool. Michael Calore: And all the photos from the week stack up. So, if you post 12 photos, and then you look at my profile, you can just tap through all 12 photos, and then that's it. That's all you get. Lauren Goode: Good job, Nathan and team. Michael Calore: Who's Nathan? Who are you shouting out? Lauren Goode: Nathan Sharp is one of the cofounders of it.
He's a former Instagram guy. I think his cofounder is as well. It was founded by two ex-Instagram employees. And the whole idea is they're trying to make ... it's not the anti-Instagram, but it is more private. Michael Calore: Feels like the anti-Instagram right now. Lauren Goode: It's nice. It's a nice place to hang out. Michael Calore: Well, thanks to both of you for those great recommendations. Lauren Goode: Thanks, Mike, for yours. Katie Drummond: Yeah, Mike, thanks. Lauren Goode: Thanks, Mike. Katie Drummond: Bye. Lauren Goode: See you on the jet. Michael Calore: And thanks to you for listening to Uncanny Valley. If you liked what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you'd like to get in touch with us with any questions, comments, or show suggestions, please write to us at uncannyvalley@ We'd love to hear from you. Today's show is produced by Kyana Moghadam. Amar Lal at Macro Sound mixed this episode. Jake Loomis was our New York Studio engineer. Daniel Roman fact-checked this episode. Jordan Bell is our Executive Producer. Katie Drummond is WIRED's Global Editorial Director, and Chris Bannon is the Head of Global Audio.
