How WIRED Analyzed the Epstein Video

WIRED

2 days ago

Photo-Illustration: WIRED Staff

The DOJ recently released what it described as raw footage from the night of Jeffrey Epstein's death in 2019. When WIRED's Dhruv Mehrotra went through the metadata, he found that it had been, in fact, modified. In today's episode, we dive into what Dhruv found and what it means.

Mentioned in this episode: "The FBI's Jeffrey Epstein Prison Video Had Nearly 3 Minutes Cut Out" by Dhruv Mehrotra, and "Metadata Shows the FBI's 'Raw' Jeffrey Epstein Prison Video Was Likely Modified" by Dhruv Mehrotra.

You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Katie Drummond on Bluesky at @katie-drummond. Write to us at uncannyvalley@

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Michael Calore: Hey everyone, this is Mike. Before we start, I want to take the chance to remind you that we want to hear from you. Do you have a question around AI, politics or privacy that has been on your mind, or just a topic that you wish we talked about on the show? If so, you can write to us at uncannyvalley@ And if you listen to and enjoy our episodes, please rate the show and leave a review on your podcast app of choice. It really helps other people find us. How is everybody doing this week?

Katie Drummond: I'm doing well. I am recovering from a vacation. I went to Detroit, Michigan with my family. If you haven't spent time in Detroit, I highly recommend it as a vacation destination, which may surprise you, depending on what kinds of vacations you like to take. But if you like to spend time in interesting cities with great food, you could spend some time in Detroit, it was awesome. So, I'm good. You come back from vacation and you're like, you hate everything, and so I'm there, but I'll get out of it. I'll be fine.

Lauren Goode: Is it the purpose of a vacation though, to give you a little bit of relaxation and perspective so that you don't hate everything?

Katie Drummond: That's not the kind of vacation I take, Lauren. And-

Lauren Goode: Okay. We need to get you to a spa.

Michael Calore: So, Detroit's famous for its pizza, for its square pizza. How does it rate?

Lauren Goode: And its cars.

Michael Calore: Yeah, and its cars, but really, we're more concerned about pizza.

Katie Drummond: Cars, great, great roads. We had pizza twice. We had Detroit-style pizza night one. I mean, it's delicious. I find it very heavy. I like to power consume pizza, so I'm like 3, 4, 5 pieces of pizza. You can't do that with this pizza. You know what I mean?

Michael Calore: Yes.

Katie Drummond: So I find that to be kind of a bummer, but it's good, it's just very heavy.

Lauren Goode: Katie, I have a very important question for you.

Katie Drummond: Okay.

Lauren Goode: Do you eat your pizza with a fork?

Katie Drummond: No.

Lauren Goode: Thank God, okay.

Katie Drummond: Do you?

Lauren Goode: We have shaken it out. We can keep going.

Katie Drummond: Definitely not.

Lauren Goode: Okay. No.
Katie Drummond: No chance.

Lauren Goode: No.

Katie Drummond: No, no chance.

Lauren Goode: As Jon Stewart once said, "You fold it and you eat it."

Katie Drummond: Yes.

Dhruv Mehrotra: I generally fold mine into a small sphere and I just take the whole thing and put it right in my mouth.

Lauren Goode: I like that. Who is that voice who just joined us on this podcast?

Dhruv Mehrotra: It's me, it's Dhruv.

Lauren Goode: It's our resident conspiracy theorist.

Michael Calore: This is WIRED's Uncanny Valley, a show about the people, power, and influence of Silicon Valley. Today, the Jeffrey Epstein prison tapes. Likely, you have heard that the US Department of Justice and the Federal Bureau of Investigation have released nearly 11 hours of footage from a camera outside Epstein's prison cell. The tape was from the night before he was found dead in his cell in 2019. When it was made public, we here at WIRED immediately analyzed the footage and found that it had been modified, and that nearly three minutes seemed to have been cut from the feed. WIRED's Dhruv Mehrotra and independent video forensic experts went through the footage's metadata and found that it was likely modified using Adobe video editing software. This news has led to a deluge of speculation across the internet about what the edits to the footage mean, if anything. We'll dive into what this means and why the Epstein case has proven to be a surprising fracturing point for Trump's right-wing base. I'm Michael Calore, Director of Consumer Tech and Culture here at WIRED.

Lauren Goode: I'm Lauren Goode, I'm a senior correspondent at WIRED.

Katie Drummond: And I'm Katie Drummond, WIRED's Global Editorial Director.

Michael Calore: We are also incredibly lucky to have with us on the show today our WIRED colleague who analyzed this recently released footage himself. Please welcome Dhruv Mehrotra.

Dhruv Mehrotra: Hi, thanks for having me.

Michael Calore: Okay, well, let's start with why this video was released at all. It's been almost six years since Jeffrey Epstein's death, hasn't it?

Lauren Goode: It has been. Jeffrey Epstein died in August of 2019, and pretty much from the start, his death has fueled all of these conspiracy theories, partly because of his very high-profile associations. He was known to be friends and business associates with, and to fraternize with, celebrities and high-ranking officials, and even technologists, and also because of the explosive nature of the sex trafficking charges that Epstein was facing. So this all kind of had the elements that the internet needs to fuel a giant conspiracy theory. Now, one of the elements that fueled these theories was the fact that around the time when Epstein died by suicide, there was a malfunction in the cameras where he was being held, at the Metropolitan Correctional Center. About half their cameras weren't working, none of them had a clear view of Epstein's cell door, this has been previously reported by WIRED. And people have speculated that this is when he could have been killed, even though there's no proof whatsoever of that.

Katie Drummond: And we'll talk more about this later, but this issue is still alive and thriving online because Trump and his allies really capitalized on these conspiracy theories, all the theories about Epstein and what happened to him in that prison, on the campaign trail.
So, while running his campaign for a second term, Trump promised that his administration would release allegedly explosive revelations about what really happened when Epstein died in custody in 2019, and Epstein's supposed "client list." So for months leading up to the joint memo that the DOJ and FBI published last week, Pam Bondi, the Attorney General, had promised to release records related to Epstein. So, some of those have been slowly released, and this latest video that Dhruv has been reporting on is the latest piece of that rollout. So, that's really how we got here.

Michael Calore: Yeah. Now, Dhruv, how did you go about analyzing the footage and how did you arrive at the conclusion that the video wasn't actually raw footage as it was described?

Dhruv Mehrotra: Yeah, so on Monday last week, the DOJ and FBI released two videos, and there was sort of almost immediate speculation about whether or not these videos were doctored. So I wanted to look at the files themselves, not the video feed, and examine the metadata of the video just to figure out if there was anything in there that I could see that could lead to any clues about it being doctored. This is something I often do because as a journalist, you're often vetting leaked or hacked documents to see if they were tampered with or something like that. So the first step was downloading both versions of the video that the FBI released, the so-called raw version, and then the enhanced one. They said that alongside the raw video, they released a video that had some enhancements that helped their analysts come to the conclusion that Epstein did in fact kill himself. Together, the videos were about 21 gigabytes. I ran both of those files through this metadata analysis tool and I looked at what is called XMP metadata, which is basically data that's embedded by software on a file when it's touched, essentially. So looking at that data, it became pretty clear pretty quickly, actually, that it wasn't a direct export from a surveillance system as the FBI kind of described it in their memo. Instead, the metadata showed that the video had been assembled from two distinct clips, two different MP4 files, using Adobe Premiere. It even names the files in the metadata. So those files are listed in a metadata section called ingredients, which is how Premiere tracks source materials that are used in a project. And in that metadata, we also saw that the project was saved multiple times and that there were internal markers and comments left behind, likely used to flag activity during the review. So, all of this is sort of standard for edited video workflows, but it really contradicts the DOJ's description of this as raw video. It's not raw, it was manually edited and stitched together as a composite of two different video recordings.

Katie Drummond: It's really wild to me in the context of unforced errors. And how did no one in the DOJ or the FBI think, oh, maybe someone is going to look at the metadata? That's one very obvious question, but then Dhruv, on a more sort of technical note, if someone has that foresight, if they think, okay, there will be metadata attached to these files, dah, dah, dah, dah, dah, someone could look at it, what are their options? Is there anything that you can do to scrub the metadata on video files like these, that potentially they could have done if they didn't want some annoying reporter like you to go looking or poking around to see sort of what was in there?
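To make the metadata check Dhruv describes concrete, here is a minimal sketch of that kind of inspection: scan a video file's raw bytes for an embedded XMP packet and print the fields an editing workflow leaves behind, such as Premiere's CreatorTool and Ingredients entries. The file name below is hypothetical, and this illustrates the general technique, not the specific tool WIRED used.

```python
# Minimal sketch: pull the embedded XMP packet out of a video file and grep
# for fields that betray an editing workflow. Hypothetical file name.
from pathlib import Path

def extract_xmp(path: str) -> str | None:
    """Return the first XMP packet embedded in the file, if any."""
    data = Path(path).read_bytes()
    start = data.find(b"<x:xmpmeta")
    if start == -1:
        return None  # no XMP packet embedded in this file
    end = data.find(b"</x:xmpmeta>", start)
    if end == -1:
        return None  # truncated packet; bail out
    return data[start : end + len(b"</x:xmpmeta>")].decode("utf-8", errors="replace")

xmp = extract_xmp("released_video.mp4")  # hypothetical file name
if xmp:
    for line in xmp.splitlines():
        # CreatorTool names the software; Ingredient entries list Premiere's
        # source clips; "saved" events show how many times a project was saved
        if "CreatorTool" in line or "Ingredient" in line or "saved" in line:
            print(line.strip())
```

As for Katie's question about scrubbing, stripping metadata before release is routine, and one common approach is a tool like exiftool, where `exiftool -all= video.mp4` deletes the writable metadata tags in a single pass, though some container-level fields can survive.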
Lauren Goode: She says that with maximum affection, Dhruv.

Katie Drummond: Of course. You're not annoying to me, but you're definitely annoying to them right now.

Dhruv Mehrotra: Yeah, I definitely get the sense that I'm very annoying to them right now. Yes, there are plenty of things that you can do to scrub metadata from a file before you upload it somewhere for someone to download and inspect. In fact, that's sort of standard practice across the internet. I think most social media platforms, when you upload a photo from your iPhone or from your Android, will scrub the metadata before it puts it up on Instagram or whatever. And it's also standard practice for reporters to do that if we receive documents and we want to publish parts of the documents but not the entirety of them, we'll scrub metadata to make sure that nothing in the file can lead back to our sourcing. So it's standard practice and frankly, kind of surprising that the FBI didn't do it. With that said, though, forensically you could see a world where you would want to keep the metadata in to show the public, or show a judge, show attorneys exactly who touched a file before it was released publicly. So that way there was a sort of trail of custody to make sure that the evidence hadn't been tampered with. So, I can see it working both ways, but it seems like neither of those cases is what happened here.

Michael Calore: Do we know who touched this file?

Dhruv Mehrotra: We do, sort of. We know that this file was edited by a Windows user with the name MJ Cole. I think that's a partial username, not a full username, but someone named MJ Cole, and perhaps a longer last name, I suppose.

Michael Calore: Right.

Lauren Goode: And we know, we have this name because this is the person who logged into the machine? This is the name attached to the Adobe license? How do we know this is actually the person? Because it could have just been another video editor logged into someone else's machine.

Dhruv Mehrotra: No, that's a good point. I should say that we know that the user account that opened the file and edited it was MJ Cole. Whether or not someone else was using that Windows computer at the time, that's impossible for me to say with the metadata alone.

Lauren Goode: Dhruv, I'm glad you mentioned that about the workflow of video journalists because a very long time ago I was a video journalist and I can attest to having exported the same video file multiple times from Apple Final Cut Pro or Adobe Premiere, with absolutely zero intention of manipulating the video content, just maybe trimming or making one small change and exporting the files. In this instance, what's notable about these file exports that could lead people to potentially believe that they were manipulated with malicious intent?

Dhruv Mehrotra: Yeah, I mean, I think at first I was just trying to confirm that any kind of editing software had been used. Right? I think if you are just trying to export some proprietary surveillance CCTV footage to an MP4, there are lighter-weight tools than Adobe Premiere, which actually has all of these additional AI-based editing features. Right? And I think speaking to forensic experts, that was one of the big questions they had: look, if you're just going to export a video, why use Premiere? So that was one big question I had, right? But after getting a few tips, I also noticed more odd things in the metadata outside of them just using Adobe Premiere. Right?
Specifically, there were timestamps in the source video clips that were composited to form the final video. The first source file is listed as being four hours and 19 minutes long, but the video that was actually used in the final output was only four hours and 16 minutes long, which means that nearly three minutes were cut from this original clip that was put into the video. That's a sign of an edit, not just an export. When I spoke to Hany Farid, a UC Berkeley professor who focuses on digital forensics and frankly often testifies in court about manipulated media, he pointed out very quickly, just watching the video, that the aspect ratio of the footage changed throughout. So I think it was pretty widely reported when the video first came out that there was a missing minute. Attorney General Pam Bondi kind of attributed that missing minute to just routine turnover in surveillance footage and sort of just wrote it off as nothing to really be concerned about. But what's interesting about that is after that jump, the aspect ratio changes. You can see a little toolbar in the top right of the screen. So this isn't a raw export of surveillance footage, something else is kind of happening here. And we just don't know enough about the process the DOJ and the FBI used to analyze these videos to really say much.

Katie Drummond: And obviously, we have plenty of sort of careful language in these stories that you've published, Dhruv, but none of this proves that the videos were edited with the intention of deceiving people, of hiding something. Right? There are some pretty boring explanations for all of this, although I will say they're not exactly doing themselves any favors at all, which is so painful to witness because these do feel like a lot of real amateur-hour unforced errors if, in fact, this is all very banal stuff.

Dhruv Mehrotra: Yeah. I mean, look, the goal of the story wasn't to prove or disprove a conspiracy. It was really to independently verify what the DOJ and FBI said about the footage, which is that it was a raw surveillance video. And based on the metadata, that's just not accurate. It was processed, reviewed, and assembled from multiple clips, and that doesn't mean that anything nefarious happened, right? There are plenty of non-sinister reasons that footage might have been reviewed or exported that way, especially if it's being prepared for public release. But when the DOJ labels something as raw and then doesn't disclose that it's been edited or stitched together, that opens the door to a lot of suspicion. And in a case like this, a high-profile case where there already is a ton of suspicion and conspiracy around Epstein and his life, his death, I think any ambiguity will just lead to more conspiracies.

Michael Calore: Right, and a lot of people online have taken this as confirmation of a cover-up. Did you anticipate that when you were putting the story out?

Dhruv Mehrotra: Yeah, and I think to Katie's point about all the careful language in the story, we fully expected people to run with this story and to use it to fit whatever flavor of Epstein conspiracy they subscribe to, which is part of the reason that we wanted to lay out the facts very, very clearly. Because if you don't explain what the metadata actually means, someone might not do it responsibly, right? So we wanted to be very careful with how we presented the findings.
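The discrepancy Dhruv describes is simple arithmetic on the two durations recorded in the metadata; here is a worked version using the figures from the reporting, purely for illustration.

```python
# Worked arithmetic for the cut described above: the first source clip is
# listed at 4 hours 19 minutes, but only 4 hours 16 minutes of it appears
# in the final output.
from datetime import timedelta

source_len = timedelta(hours=4, minutes=19)  # clip length listed in the metadata
used_len = timedelta(hours=4, minutes=16)    # portion used in the final output

print(source_len - used_len)  # 0:03:00 -> nearly three minutes trimmed
```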
Katie Drummond: Yeah, and I think I will pat everybody involved in this reporting on the back, I'm very proud to have published it. I wouldn't think twice or blink at the idea of publishing the work. It was very carefully done and very sort of clearly explained and articulated in both stories. I think there is this sort of interesting feeling as an editor, and I think as a reporter, when you publish a story like this and you know, you have a fairly good sense of how the internet will respond to it, which is, this is going to kick up a lot of dust. There's going to be a lot of really problematic conspiratorial conversation about this work, but the work is the work. It's true, it's accurate, it's carefully vetted, it's carefully written. And ultimately, you have an obligation to publish it, even if you know that some very special groups of people on Reddit and elsewhere on the internet are going to absolutely lose their minds. I mean, that is just, you can know it and you publish anyway, and I think this is a really good example of that kind of situation for us.

Dhruv Mehrotra: Well, I think it's also important to point out that we presented the facts to the DOJ and to the FBI and gave them the opportunity to respond and to clarify what exactly happened here. Right? And neither agency did. The DOJ referred us to the FBI, the FBI referred us back to the DOJ. We haven't had any clarity on what the process was for compiling this video.

Lauren Goode: Dhruv, if I'm understanding correctly, after WIRED ran its story last Friday about the evidence that showed the video was edited, a couple days later Attorney General Bondi sort of explained that by saying that there is a flaw in the surveillance system's daily cycle at the Metropolitan Correctional Center. She said that one minute is missing from every night's recording, which seems like an entirely plausible explanation. But unfortunately, they've already sort of shot themselves in the foot by claiming the footage was raw when it wasn't. And now, there's this new report you just published in WIRED saying actually, there was this three-minute discrepancy here too. This is really just going to fuel even more conspiracy theories, right?

Dhruv Mehrotra: Yeah. So, just to backtrack a bit on that: when Attorney General Bondi made the statement about the missing minute, that wasn't in response to WIRED's reporting, that was in response to just a very obvious missing minute that the internet found and immediately latched onto, and a reporter asked her a question about it. But what we found today does sort of call into question the explanation of the missing minute. What we found is that the full "unedited, raw surveillance video" is made up of two clips, and the first clip's total length is four hours and 19 minutes, but the final output only used four hours and 16 minutes of it. So there's three minutes of missing footage at the end of that first clip. And it just so turns out that that edit was made basically the frame before the missing minute that Pam Bondi described occurred. Right? So we don't know what's on that remaining three minutes, but maybe it's the missing minute, maybe it's not. The problem is that the DOJ hasn't told us anything.

Michael Calore: So if I'm hearing you correctly, the first clip extends until after midnight, and the second clip starts at midnight. And when they made the edit, they edited it so that the first clip ends at 11:59.
So they introduced the missing minute, possibly, or they had an opportunity to cut at midnight and not release a video that had a missing minute, and they did not take that opportunity?

Dhruv Mehrotra: Yeah, I don't want to dive into conspiracy here. So we don't know what's in the last three minutes of the clip that was cut. Right? We don't know if it included footage into August 10th the following day, or if it cut after the missing minute and it just was dead air or something that needed to be cut out because it was some artifact of an old surveillance camera. We just don't know, and these are all questions we brought to the DOJ and they didn't respond.

Michael Calore: I see.

Lauren Goode: Dhruv, what has been the most surprising reaction to this or response to this that you've gotten on the internet since your stories have been published? Are all the conspiracy folks flooding your Signal now?

Dhruv Mehrotra: Yeah, I have a very active inbox on both Signal and email.

Lauren Goode: Oh, God.

Dhruv Mehrotra: And to be fair to everyone who emails me, there's plenty of actually really good tips in there. In fact, this second story about the three-minute cut, that started from a tip. Someone said, "You should check out this aspect of the metadata." So there's plenty of really good stuff in my inbox, but there's also a ton of conspiracies, and that's what happens when you report about Epstein. And I think in all seriousness though, the volume of responses here kind of shows how little trust there is in institutions right now. Right? The vacuum gets filled pretty fast when there's any kind of whiff of inconsistency in a case like Epstein's, where people already think the story doesn't quite add up.

Michael Calore: Right. Well, Dhruv, thank you so much for coming on and telling us about your reporting on this story.

Dhruv Mehrotra: All right, well, thanks for having me.

Michael Calore: We're going to take a quick break, but when we're back, we'll talk about why the Epstein prison tapes have become a wedge issue for one group in particular, Trump supporters. So at the beginning of our conversation, we were talking about why the Epstein case had become this infinite source for conspiracy theories, but who is most invested in these conspiracy theories at this moment?

Katie Drummond: Oh, well, surprise, surprise, it is a significant portion of the right-wing base. So, led by key figures in the MAGA movement, people have been pushing these unsubstantiated claims that Epstein was murdered and that liberal "deep state actors" in the government did it essentially to hide his clients' names and all of their awful criminal activities. So I'm talking about people like Steve Bannon, Laura Loomer. Laura, thank you for sharing our reporting on social, we really appreciate the traffic bump that we got from that. So it's the usual suspects. It's probably who you would expect if you pay much attention to MAGA world. But there was this shift last week, when the DOJ and the FBI released the memo announcing that these videos would be released, and this one video in particular. The memo also concluded that their investigation into the Epstein case was officially closed and that no foul play was found. That memo also stated that the Epstein client list that Bondi had said was on her desk in February didn't actually exist. Very special. So, as you can probably guess, MAGA figures did not take kindly to those announcements.
And Trump supporters had been expecting some kind of breakthrough, some kind of conspiratorial revelation in this saga. And so they really fanned the flames of this conspiracy theory that now they can't put out and that is really spiraling out of control.

Michael Calore: And it's interesting, because the MAGA world and the Trump brand really just thrive on conspiracy theories, and they have for a long time. They put into question President Obama's citizenship, there was the whole Pizzagate fiasco, there was the whole theory that the 2020 election had been stolen. The list goes on. So, what went wrong, if you will, with the strategy of doubling down on conspiracy theories in this case?

Lauren Goode: Yeah, this is a good question. Our colleague David Gilbert has been doing a lot of reporting on this. And what's interesting is the uproar around Epstein, and the fact that folks within the MAGA sort of faction are starting to turn on Trump, is really part of a death by 1,000 cuts here. There are all kinds of groups that are mad at him for different things right now, and it's being led by different powerful right-wing figures, as Katie pointed out. There's Tucker Carlson, the former Fox News host, who was mad about the bombing of Iran. For Loomer, she's a noted conspiracy theorist, it was Trump's acceptance of a luxury plane from Qatar. Ben Shapiro was mad about tariffs. Joe Rogan, mad about ICE raids targeting non-criminal migrant workers. Elon Musk eventually got mad at Trump, right? He was his buddy-in-chief, recently left his role in D.C. as a special government employee and has been railing about the big beautiful bill online. So there are all these different high-profile folks who have been turning on Trump lately, and I think Trump has been upsetting more and more of his base, bit by bit. The Epstein saga touches a slightly different nerve, too, because it is centered on accusations of pedophilia. And also, the boogeymen in this case for so long were the Democratic Party. Trump's base was extremely riled up over it. Trump himself used to say that Epstein's death was a cover-up job. Now it turns out that the MAGA folks have reason to believe, whether it's a good reason, whether it's a valid reason, but reason to believe that Trump himself could possibly be one of the figures on this so-called list.

Katie Drummond: Oh, it's exhausting just thinking about it. It is also worth remembering, and I think reinforcing, that these conspiracy theories aren't random. They're not sort of just being pulled out of thin air. A lot of them actually coalesce around one very special conspiracy: QAnon. You may or may not remember it. I mean, that was all the rage a few years ago. But this idea that QAnon championed, and there are still certainly QAnon adherents out there, is that there is this sort of shadowy cabal of government elites working to cover up a global child sex trafficking operation. This was really foundational stuff for the MAGA movement. And QAnon borrowed from a long tradition of conspiracy theory movements in the US, think about the Satanic panic from the '80s, and put those on steroids. So certainly, none of this started with the Trump Administration, and I don't think it will stop with the Trump Administration. It really has become sort of embedded into the way of thinking and sort of navigating the world and seeing the world for an unfortunate number of people out there, quite honestly.
Michael Calore: I'm curious to know what you both think this rift among the right-wing base and the overall proliferation of conspiracy theories means for the political and technological landscape of the country. Conspiracy theories aren't new, they're not going anywhere, but what does that mean in a world where tech companies are intrinsically more embedded in the political sphere? How much responsibility do these companies hold for the spread of these theories?

Lauren Goode: I would really love to hear your guys' thoughts on this because the question of, how responsible are the tech companies, is just something that comes up literally every day for us, for the ills of society. You mentioned conspiracy theories are not new. They're not. They take hold in a different way though when they're on the internet, because they spread more rapidly and because of the way algorithms can surface different content more than others. Also in the United States, we have this legal framework that actually shields internet platforms from being responsible for some of the bad content: it's Section 230. And of course, a lot of the tech companies themselves don't love the fact that they may have to invest in robust content moderation systems because they look at those things as a cost center. They don't look at it as an area of growth for them, because you're actually sort of restricting in some ways what people can put on the platforms. So when things go well or are relatively pleasant in an app, because a company has put the cost into decent content moderation, no one complains, everything seems relatively peachy. When all hell breaks loose, that's when everyone says, "Wait, who's moderating the content?" And that's sort of where we are perpetually living now. And I think unfortunately, the best chance we have in the short term of curbing conspiracy theories is probably trying to educate the public so that people can better understand what is real on the internet and what is not. I hate saying that, because it just puts so much more onus on the public, on the individual consumer to figure out what's real and what's not. But I think barring any regulation in the short term, barring any major changes to the way these platforms work, I think that's the nearest and best bet. That's what I do think internet platforms should be responsible about, and possibly for the algorithms, because you don't have to amplify the Jeffrey Epstein conspiracy theories on X or whatever it is that's bubbling to the surface.

Katie Drummond: Yeah, I mean, you don't have to amplify it. I mean, you can also make deliberate choices to bury it. And I think, Lauren, I wish I had a lot to add to that, but I just think that you're totally right. I mean, this is at the end of the day, conspiracy theories and conspiracy theorists and mis- and disinformation are not going anywhere. Right? They are here to stay. It's a regulation question. It's, what are we allowing people to see on the internet? What are these technology companies facilitating with their algorithms, with the way they run their companies? And as of now, they are by and large not held accountable for that at all. And it's really hard. I try to be relatively optimistic about things, I try to always have some solution in my head. This is one where there's a futility to it as AI and sort of more and more realistic-looking nonsense, and often dangerous nonsense, sort of floods these platforms. It's very hard to see a solution short of really aggressive regulation. That's my 2 cents.
And I think education, look, I think it's a nice idea. But if we are trying to educate and empower people who are already down the Epstein rabbit hole, for example, among the many other rabbit holes they could already be down, there's a futility to that. It feels like a lost cause, almost like a lost couple of generations of people in some cases, and that's very sad. So I don't have a solution, but it is a regulation issue. And again, in this current moment in time, it's hard to imagine any of that regulation actually coming to the fore.

Michael Calore: Yeah, and I mean, to add to that, so many of the platforms are now turning to AI as a solution for moderation, right? They're building these AI tools that are going to be moderating in place of humans who might be able to make those decisions about choosing not to amplify things that are possibly harmful or intentionally harmful. And I don't know, the sick part of me is looking forward to the future where the AI tool that spends so much time moderating all of these conspiracy theories starts to generate its own conspiracy theories. And then the next big conspiracy theory is one that was born of AI and we just won't know.

Lauren Goode: We're probably not that far from that. I think we're about two weeks from that.

Katie Drummond: I can see that happening, yeah. Coming up soon on …

Michael Calore: All right, let's take another break and we'll come back with the recommendations. All right. Well, thanks to both of you for a great conversation today and thanks in absentia to Dhruv. Before we shift gears to our own personal recommendations, we have an update on our end. This is sadly Katie's last week on this roundtable edition of this show with me and Lauren. Lauren and I will still be here every Thursday, but it'll just be the two of us for a while. And Katie, you will be missed.

Katie Drummond: It was short, it was sweet, and I've had a fantastic time and I just want everyone to know that I am not going too far. I have something new coming, something new and exciting on this very feed, spoiler alert, that I'm very excited to launch in the very near future. And I'm sure I will be back. I think you guys run a great show, just the two of you, but I would love to make a guest appearance every once in a while.

Lauren Goode: Oh, we would love that. I was already thinking, when can we invite Katie back?

Katie Drummond: Oh, that's so nice.

Lauren Goode: We're basically building a Marvel Cinematic Universe of WIRED reporters and editors here. So we still have the Uncanny Valley news episode featuring Zoë Schiffer and other WIRED colleagues, that publishes later in the week. We have our roundtable, of course, Uncanny Valley on Thursdays, and now we have this new project coming from Katie. You'll see us moving around a bit, but don't go anywhere, stay subscribed to the feed. Tell us what you want to hear more about, leave us reviews, and we're very excited for you, Katie.

Katie Drummond: I'm very excited for all of us.

Michael Calore: All right, well, on a happier note, I know we all have really good recommendations because we've all been away on vacation for a week and we've had a long time to think about it. So Katie, you go first. What's your recommendation?

Katie Drummond: Oh, no. Oh my God, I have one. This is so bad, this is classic Drummond, this is so bad. I'm watching this show on Bravo, surprise, surprise, called Next Gen NYC. Has anyone heard of this? Have you guys heard of this?

Michael Calore: No.
Katie Drummond: Well, listen up, listen up. This is like if the Real Housewives franchise featured 22-year-olds, that's what this show is.

Lauren Goode: So it's Girls?

Katie Drummond: No, but it's a reality show, Lauren, and it's about a group of young people living in New York City. And the best part, several of them are the children of Real Housewives OGs.

Michael Calore: Wow.

Lauren Goode: Wow.

Katie Drummond: It's fantastic. It is the best reality TV I have seen in at least eight to 12 weeks. It's very good, it's very good. I joined the Reddit community, so I've been reading up, I've been following the discourse. I mean, look, I was on vacation, but also, would I watch this during my regular work life? Absolutely. It's very good, and if anyone needs to just unplug, watch the dumbest possible thing, it's really good. I recommend it. I'm sorry that I don't have recommendations other than bad TV and butter, but that's just my life.

Michael Calore: Is this why you're such a big fan of reality television, because it's your method of unplugging and just watching the dumbest thing possible?

Katie Drummond: Yeah. I found, maybe six or seven years ago, maybe it was during the first Trump Administration, and I was obviously covering it as a journalist, as we all were, living through it, I developed this inability to watch smart, serious TV. I used to watch, I mean, my husband and I watched Sons of Anarchy, we watched Mad Men, Breaking Bad, all of those sort of 2010 classics. We watched very serious TV, and then something happened, something changed in me, and I am only able to watch the dumbest, most mindless possible TV. And I think it's just like I need to fully disconnect from anything that's going to make me feel feelings because looking at everything happening in this country and in the world and the stress of the job, and you feel a lot of feelings. And it's nice to just watch Brooks and Ava and Charlie and all of their other friends at the bar arguing about how Ava said that Adriana couldn't possibly launch her own fashion line. That's just nice. That's a nice problem for them to have.

Michael Calore: Lauren, what is your recommendation?

Lauren Goode: My recommendation, I have two. One is Planet Money. Well, first of all, just Planet Money, the podcast by NPR, fantastic. I am a subscriber. They did an episode recently on the Big Beautiful Bill, it was one of the best breakdowns I'd heard of what it actually contains, what's actually going to be happening to Medicaid. I thought it was really informative. Listened to it when I was at the gym because I'm a nerd like that. Check that out. My other recommendation, which is more in the vein of what Katie is saying, like unplug, give your mind a break. Go to the movies.

Michael Calore: Go to the movies.

Lauren Goode: Just go to the movies.

Katie Drummond: I like that.

Michael Calore: This is the worst time of year to go to the movies.

Lauren Goode: No, it's the best time of the year because air conditioning and comfy seats.

Michael Calore: Yeah, but it's-

Katie Drummond: I'm with Lauren, that's great advice.

Lauren Goode: No, I've been three times this year and every time, very last minute. A friend invited me last minute to go see the 40th anniversary of Goonies that was playing downtown. We went, it was fantastic. I was hanging out with friends one night and we said, "Let's go see Sinners." It was playing right across the street, fantastic. The theater was practically empty, it was glorious.
The movie itself, actually, check out our friends at Critics at Large, the New Yorker pod. They had some thoughts on Materialists, so I'm going to toss it to them, but it was great. I was like, I need to go to the movies more.

Michael Calore: Oh, for sure.

Lauren Goode: What's your recommendation, Mike?

Michael Calore: I'm going to recommend a book, and this is a book that I read over 4th of July weekend. It's called I Cheerfully Refuse, by Leif Enger. I believe this is Leif Enger's fourth novel. He's a bestseller, you may have heard of his name before. This is his new book, it is dystopian fiction. It depicts a world a few decades from now in which society has crumbled in a way that feels very recognizable and familiar, a bit like a more dangerous and uncertain version of today. The entire economy is controlled by a handful of super-rich elites. The education system has crumbled, most Americans are proudly illiterate. We have a proudly illiterate president in this book. Satellite communications have been enshittified, are totally unreliable, GPS doesn't work anymore. It is just like an eroded version of the world that we live in, and it's really starkly rendered. We drop into this world and we follow the main character on a quest. The whole book takes place on Lake Superior in northern Minnesota and western Ontario. The main character gets in a boat and he goes and he sets sail on Lake Superior and we follow him around. I'm not going to spoil it by saying anything more than that, but it is gripping and unpredictable and also just beautifully, beautifully written at the sentence level. It is like poetry for pages. It's amazing, emotional, deep. It will enrage you because it is a book for this moment. It's just gorgeous.

Lauren Goode: I don't know what to say to that, except that it sounds really deep.

Katie Drummond: You are so much more sophisticated than both of us. Sorry, Lauren.

Michael Calore: Well, I mean, not really.

Lauren Goode: I accept this.

Michael Calore: No, I mean, I know I recommended a nerdy book, but you should really read it just because it gives you a really sharp sort of potential future of what it's like if you just let the richest people in the world run the economy and run all of the basic services that we rely on, to the point where they just fall apart because the most important people don't need them anymore and it's the rest of us who have to suffer for it. And it's like, it's kind of grim, kind of feels like that's the way the world is moving, and that's the reason why the book resonated with me so much when I read it. Yeah.

Lauren Goode: I'm going to add that to the Goodreads. Thanks so much.

Michael Calore: Of course.

Lauren Goode: Yeah. I almost recommended a book by a philosopher, but I'm going to hold off and keep it lowbrow for now. Once Katie's gone, we can just lit nerd out, Mike.

Michael Calore: I don't know. I'm going to go watch Goonies. I don't know.

Lauren Goode: Welcome to WIRED's Lit Nerd podcast.

Michael Calore: All right, well, thank you for listening to this episode of Uncanny Valley. If you liked what you heard today, make sure to follow the show and rate it on your podcast app of choice. If you'd like to get in touch with us with any questions, comments, or show suggestions, write to us at uncannyvalley@ Today's show was produced by Adriana Tapia. Amar Lal from Macrosound mixed this episode, Pran Bandi was our New York studio engineer. Mark Lyda was our San Francisco studio engineer. Kate Osborn is our executive producer.
Katie Drummond is WIRED's Global Editorial Director, and Chris Bannon is the Head of Global Audio.

Seriously, What Is ‘Superintelligence’?

WIRED

20-06-2025

In this episode of Uncanny Valley, we talk about Meta's recent investment in Scale AI and its move to build a superintelligence AI research lab. So we ask: What is superintelligence anyway?

Meta AI at the Meta pavilion ahead of the World Economic Forum (WEF) in Davos, Switzerland, on Saturday, Jan. 19, 2025. Photo-Illustration: WIRED Staff

Meta just announced a major move in its AI efforts—investing in Scale AI and building a superintelligence AI research lab. While Meta has been trying to keep up with big names in the AI race, such as OpenAI, Anthropic, and Google, the company's new strategy includes dropping some serious cash to acquire talent and invest in Scale AI. Today on the show, we dive into the deal between Meta and Scale AI, including what Meta aims to get out of the investment, and we ask the question we are all wondering: What is superintelligence, anyway?

You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Katie Drummond on Bluesky at @katie-drummond. Write to us at uncannyvalley@

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Michael Calore: How is everybody doing this week?

Katie Drummond: Well, I'm back and I'm so happy to be here. I ate so much butter in France last week, the first couple nights I had to myself. And so what would a normal person do in Paris? Maybe they'd go out and sit at the bar and have dinner alone, maybe they'd meet up with a friend. I ate butter and bread alone in my hotel room. And let me tell you, if you're listening out there and you are a mom and you have a young kid or young kids, if you're married, if you struggle with having a spouse and a child and maybe some pets and a busy job, there is no better experience than eating French butter and bread alone in a hotel room.

Michael Calore: Wow.

Lauren Goode: Uncanny Valley, brought to you by the dairy lobby.

Katie Drummond: By the French dairy lobby. I feel incredible. How are you guys?

Lauren Goode: I'm OK. I got bangs.

Katie Drummond: You did?

Lauren Goode: Yeah. So most of our listeners can't see it unless you watch our new video promos online, but I got bangs. There's often a correlation between things going on in the world and women cutting their bangs. That's all I'm going to say about that. But otherwise, I'm great. I'm great. Rate the bangs, go online. Thumbs up, thumbs down.

Katie Drummond: Five out of five.

Lauren Goode: Thank you. Katie, really, you were my inspiration here.

Katie Drummond: Oh, no, that's too kind. But I do love a bang and hate a forehead.

Michael Calore: Well, have I got a haircut for you.

Lauren Goode: Well, Mike, how are you doing? Sorry about that, Mike.

Katie Drummond: But there's just so much smarts in there, Mike. That's the thing.

Michael Calore: I'm just as God made me.

Katie Drummond: How are you doing?

Michael Calore: I'm doing great. I don't have any hair stories. I haven't eaten any butter recently, so I'm feeling really left out right now. This is WIRED's Uncanny Valley, a show about the people, power and influence of Silicon Valley.
Today we're talking about Meta's recent investment in Scale AI, and its move to build a superintelligence AI research lab. It's the latest effort from Meta to compete with the big names in the AI race like OpenAI, Anthropic, and Google. But Meta is taking a different approach. Not only are its AI models open source, but in typical Meta fashion, it seems to be trying to outspend its competitors to acquire top talent, and its Scale AI investment, which is not an acquisition, is just part of that strategy. We'll dive into what Meta is hoping to get from this investment and what it's actually getting, and whether this move could give the company the competitive advantage it's seeking. Plus we will ask, what is superintelligence anyway? I'm Michael Calore, Director of Consumer Tech and Culture here at WIRED.

Lauren Goode: I'm Lauren Goode. I'm a senior correspondent at WIRED.

Katie Drummond: And I'm Katie Drummond, WIRED's Global Editorial Director.

Michael Calore: So let's start off by diving into Scale AI. Unlike Meta, the company is not what you would call a household name anywhere outside of Silicon Valley, but it's certainly made a name for itself in the AI world. What's up with Scale?

Lauren Goode: Scale AI is a data labeling company. Sounds very sexy, doesn't it? They do the grunt work of analyzing and categorizing the data that is later distributed to larger AI models in order to train them. It's nothing fancy, but they kind of perform an essential function for machine learning programs to improve. And they have some pretty big customers. Companies like OpenAI and Google have been among their clients. Earlier this year, our colleague Will Knight reported on Scale AI's new platform that allows AI models to be automatically tested against key benchmarks and pinpoints any weaknesses in the models. So basically Scale AI has been making this concentrated effort to be a key partner as it's working with these larger AI companies.

Katie Drummond: I think it's also worth pointing out that really at the heart of the company's success is its founder, Alexandr Wang, without an E, right?

Michael Calore: Yeah.

Lauren Goode: Yeah.

Michael Calore: He's the Flickr of CEOs.

Katie Drummond: There you go. He's 28 years old, and at one point in time he was actually the youngest self-made billionaire in the world, which is pretty incredible. So he's a well-known personality in Silicon Valley, and he is arguably best known for how much he networks, which, given that I do not live in San Francisco, Lauren and Mike can probably tell you a lot more about that. But he was once roommates with Sam Altman, who, supposedly, allegedly, told him to tone down the networking a notch. Unclear exactly what the motivations may have been there, but Wang clearly did not care for that advice and it has paid off for him. So this latest deal between Meta and Scale AI came actually after Wang and Zuckerberg reportedly spent a lot of one-on-one time together networking, presumably. And so it is really his relationships with so many key players in the AI industry that has positioned him now to be so powerful and to be in demand by a company like Meta that, as we'll talk about a little later, could really use a leg up in AI. Bloomberg recently reported that Wang is known for calling "a dizzying amount of people," and not just senior, but junior staffers in AI firms, to know what they're working on and what they want to do. So he's not only a networker, but he's someone who keeps his ear to the ground.
He's sort of in the know in Silicon Valley as far as AI goes.

Lauren Goode: It sounds exhausting.

Katie Drummond: It does. I felt tired just talking about that.

Lauren Goode: Yeah. Also, what does networking really look like in Silicon Valley? It's like, do you …

Katie Drummond: Well, you guys tell me. What do you do?

Lauren Goode: Yeah, I don't know. Meet some folks at Blue Bottle before you head on down to Marine Layer and then go to some all-night coding party in the carriage house of a Pacific Heights mansion? I don't know. What does that look like?

Michael Calore: It's a lot of walk and talks.

Katie Drummond: Is it really?

Lauren Goode: Well, in the Valley it is.

Michael Calore: Yeah.

Lauren Goode: Down around Palo Alto, people are in … Do people do that in San Francisco too?

Michael Calore: Oh, yeah.

Lauren Goode: Yeah? Yeah, that's a thing?

Michael Calore: Yeah, people do walk and talks.

Katie Drummond: Speaking of walking and talking, Mike, I want to hear more about the details of the deal.

Michael Calore: Yes. Let's talk about how much money is on the table in this deal, plus what Meta is hoping to get from the investment, and perhaps most importantly, what it is actually getting out of this deal.

Lauren Goode: So the deal was announced as a $14.3 billion investment in Scale AI. To be clear here, this is not an acquisition. They're just taking a 49 percent stake in Scale AI and also bringing in Alexandr Wang and a bunch of talent, but it's not an acquisition, folks. Don't call it that. We've seen a wave of this over the past 12 to 18 months, where Microsoft and Google have also either made strategic investments or licensing agreements with smaller AI players in order to sort of bring them into the fold, but not necessarily face the scrutiny of the U.S. government and the Justice Department, because they want to move fast here. They want to build these AIs as quickly as possible, and so they want to just bring in all this talent. And so yes, we are lumping Meta in here, but that is essentially what they're doing with Scale AI. Now for Alexandr, our master networker here, this deal also includes a leadership role in this superintelligence project that Meta is going to be building. We're going to get more into that later as we talk about superintelligence and what exactly it is. Zuckerberg's Meta now owns a strategic 49 percent stake in Scale AI. When it comes to what Meta is really getting from this deal, the main asset is the vast amount of AI training data that Scale AI possesses, and that leads to what Meta hopes to get to in the long run, a boost to the development of its Meta AI projects.

Katie Drummond: And I mean certainly Meta is doing a lot of clever things here, you could say, one of which is that they have put their competitors in a pretty tricky spot when it comes to whether or not they should continue working with Scale AI or whether they should move away to other companies. And Scale AI has other partnerships going on outside of the AI ecosystem as well. So the company has deals with foreign governments in Asia and Europe, it has a deal with the DOD, actually, for a first-of-its-kind AI agent program, which sounds as dystopian as it probably is. It's called Thunder Forge. And the goal is essentially to enhance military operations with AI agents. I mean, it's exactly what you think it would be. And Meta could potentially benefit from all of those alliances that Scale AI has as well.
So certainly you could say a mutually beneficial arrangement, and one that puts a lot of players in the AI space on watch as it pertains to Meta.

Lauren Goode: I mean, I think the important thing to remember about Meta is that Meta buys its way into innovation. It doesn't spin it up entirely on its own. If you look at WhatsApp, if you look at Instagram, if you look at Oculus, all acquired by Meta. And then sometimes when Meta does try to spin up some vision for the future on its own, like the Metaverse, which is all we heard about two years ago, it falls flat on its face, or face computer, as we like to say. What Meta is doing here is it's locking up not only the data from Scale AI, but it's also locking up talent and technology that it believes is critical in sort of moving forward into this next phase of AI. An oversimplified way of looking at it would be, you need really two things to level up an AI right now: you need more compute power and you need a lot of data, and ideally high-quality data. This is clearly a data buy, and it's also a way of keeping other companies from potentially using it. Like Katie just said, Google is now backing out of its partnership with Scale AI.

Michael Calore: And Meta is doing this because it has a long way to go in order to reach the top of the heap, right? We've been talking about how Meta has been stumbling over the last few years in the AI race, so I want to dig into that a little bit, especially in how it has been lagging behind its competitors and what it's trying to do specifically to get ahead with this investment.

Lauren Goode: That's right, with this strategic investment, Mike. The short answer to that is Llama, which is its foundational model, its answer to OpenAI's ChatGPT and Google Gemini. Unfortunately so far, Llama hasn't really lived up to the hype around it. It's not as powerful as the rival systems of OpenAI and Google. Notably though, Meta has tried to differentiate itself by open-sourcing Llama, meaning that it freely shares the underlying code with outside software developers and businesses. So I guess it believes in the long run that that is actually going to be beneficial to it, even if right now it's not winning the speeds and feeds.

Michael Calore: Can I ask you, what's the prevailing wisdom on why Meta decided to release Llama as an open source model as opposed to keeping it closed like all of its competitors?

Lauren Goode: Yeah, it's a good question. I wish I could just call up Mark and ask him directly about this, but I can't, and we don't know exactly what's going on in his head. But I think in any robust technology environment, you're going to have lock-in and you're going to have open source providers, whether you're talking about IBM mainframes and then Linux, or you're talking about Apple software versus Google Android, and I think Google Android is the most used operating system in the world, and various products and business models and licensing models have extended from that. And so that may be part of what Mark Zuckerberg is thinking about long term. I think he also feels very burned by Apple. I mean, he has said that specifically when he wrote a blog post last year making a case for why Llama is open source. He said, to do this well, we have to ensure that we always have access to the best technology and that we are not walking into a competitor's closed ecosystem where they can restrict what we build.
And he said that one of his most formative experiences building Meta services has been having those services constrained by what Apple will let Meta do on its platforms. So as he's thinking about the future of AI and what happens over the next several years or beyond that, I guess he doesn't want Meta to be locked in.

Michael Calore: Yeah, it feels like all of Meta's AI efforts have been very much on the surface, sort of lightweight consumer-facing things, right? They have the Meta AI app that you can use to chat, basically like a chatbot, they've incorporated their AI into the smart glasses and they've made these chatbots that live inside of Instagram and allow you to talk to somebody who talks like Snoop Dogg, basically. There's also the AI-assisted search that has rolled out to all of the different Meta projects, but it really feels like this sort of social play in AI, specifically chatbot AI, has been the thing that they have just been concentrating on. They don't have advanced systems that corporations can license. They don't have a lot of the big ammunition that the other companies have.

Katie Drummond: Well, and the consumer-facing AI, I mean if I may be so bold, is not particularly good or well-designed, or well-made, or a particularly good user experience. I mean, I don't have a Facebook account, and I haven't for a very long time. I do use Instagram. I mean, I have not once used that chatbot. So these consumer-facing bets have not been particularly successful in addition to not being very good, but they've also recently had some pretty significant blunders on the privacy front. Surprise, surprise, Meta having issues with privacy. Our colleague, Kylie Robison, reported that the app actually showed … This is insane. It is just insane. The app showed private conversations between the chatbot and users, including medical information, home addresses, even things directly related to pending court cases where people were talking to the chatbot, asking it questions, getting help with whatever weird sick question they were trying to answer or problem they were trying to solve. And unbeknownst to them, those conversations were showing up in a social feed where everybody could see someone trying to break their tenant's lease or access some pornography or get a medical diagnosis for the weird lump on their foot. All of this stuff that, yeah, should you be careful in your conversations with any chatbot? Absolutely. But most people at least assume that those dialogues and those back-and-forths aren't just going to be published onto the internet. I mean, it's a pretty stunning failure. And it certainly doesn't feel accidental, it doesn't feel like they just were like, "Oh, how embarrassing that this has happened." I just don't think they really care. And this has been obviously the narrative around Meta for a very long time, which is, problems arise with their services, particularly as it pertains to privacy, and somebody out there in media reports on it, and then they patch it up, and that's exactly what has happened this time. I think they've put a disclaimer on top of the chatbot so that you know that you're opting in or opting out of some kind of public sharing of your conversations.
But it just sort of feels like they're not particularly interested in thinking very hard about the privacy piece, which, in my opinion, when it comes to chatbots and AI and sort of this brave new world we are all marching forward into or being marched forward into, in some cases, we should probably spend some time thinking about privacy considerations. Now, of course, with this deal with Scale AI, Meta is hoping to turn the page, to open a new chapter within all of their AI efforts, consumer-facing or otherwise, specifically with this superintelligence AI lab. Michael Calore: That's something that is really important: Meta would not be making all of these big statements about superintelligence, and it would not be making this investment, if it didn't feel like it had already failed in the race to develop strong AI programs. Because this is not just like a reorganization or realignment, this is a complete reset. It is a brand new team with new leadership and a new mission. For a very long time, the person who has been heading up Meta's AI efforts has been Yann LeCun, who is a very well-respected Silicon Valley guru in AI. He has won the Turing Award, which is the nerd trophy for AI engineers. But LeCun famously does not think that artificial general intelligence is something that is on the immediate horizon. He's not a big chatbot proponent. He sees the value in large language models, but that's not where his interests are. And I think that if you're going to go all in on, OK, what is the next thing? Then you need new blood. You need people who are true believers in this next phase, which, of course, is being defined by the term superintelligence. Lauren Goode: Superintelligence. Michael Calore: Let's take a break and then we'll come back and find out what that means. Welcome back to Uncanny Valley. Before the break, we were mentioning that a key aspect of this deal between Meta and Scale AI, beyond all the big bucks involved, is the creation of a superintelligence AI lab. So question for the group, what the hell is superintelligence? Katie Drummond: Lauren? Lauren Goode: Oh, no. Well, it basically refers to developing an AI that goes beyond the human brain. It's a little bit unclear exactly what that means. For the past couple of years we've been hearing about AGI, which is artificial general intelligence, now we're talking about superintelligence. My understanding, based on talking to researchers and technologists about this, is that it's not a flip-the-switch moment, it's not like there's going to be a specific model or some product release where all of a sudden we say, "Oh, we're living in AGI," and, "Oh, the next step is superintelligence." It's all sort of happening on a continuum. But the idea is that it is having something that is as smart as a human being or the human brain in your pocket on your phone, which is just crazy. It's not going to be sentient, it's not going to feel emotions, but it's going to do such a good job of replicating all of that that you're going to feel kind of blown away by it. And maybe that's bad news for us humans, I don't really know. We do know that the term superintelligence was popularized by the Oxford philosopher, Nick Bostrom, who, in 2014, wrote a book on superintelligence. And he broke down a future where AI would advance to a point where it could turn against and delete humanity. And now, of course, it's being used by Mark Zuckerberg. So metaverse, superintelligence, I think we have a sense at this point of what's on Mark Zuckerberg's bookshelf.
Katie Drummond: And I mean, I think that whole question of sort of, what is AGI? What is superintelligence? I mean, so much of this is just branding. This is marketing that the AI industry is using to evoke a sense of, I think, sort of intimidation and fear and awe and respect and deference on the part of the general public, of policymakers, lawmakers. I mean, they want this thing to feel like this next era is right around the corner and we need to get ready now. And if we're not ready before China's ready, it's just going to be catastrophic because it's AGI. It's like, but what actually is that? And I think Meta is doing something super interesting and sort of cynical here, in my view, which is by positioning this new lab as superintelligence, they're essentially saying, "AGI? AGI is so last year. We're not even thinking about AGI. We're just jumping all the way to superintelligence." I mean, this is marketing through and through. I'm not saying that this technology isn't evolving, that it won't drastically improve over time, that we won't see, and we already do see, AI that's capable of doing things that people can do. We see that all the time. There are plenty and plenty and plenty of examples of that. But I think that these terms are being used in a very squishy, opportunistic way for industry leaders and executives to sound and to make their technology sound as impressive and as valuable and as intimidating as possible. That's what I think. Michael Calore: It is good marketing for recruiters too. If you're a person who's a professional in the AI industry, you're already pretty well paid, you feel like you're working on the next big thing because you're working on AGI somewhere, and then all of a sudden it's like, yeah, but wouldn't you rather be working on superintelligence? Lauren Goode: Yeah. Yes. Michael Calore: So who else is working on superintelligence? Are there other AGI companies that are like, "OK, no, wait, now we're doing that too." Are there people who have been working on superintelligence for a little while? Lauren Goode: It depends on which AI visionary you're listening to. Sam Altman from OpenAI still seems pretty focused on AGI. He has said that he thinks it will be reached before the end of Trump's current presidential term. Dario Amodei from Anthropic has said that he thought AGI would happen in the next two years. So those are kind of still the AGI guys. On the other hand, you have Ilya Sutskever, the former chief scientist at OpenAI. He's cofounded a company called Safe Superintelligence, and they have a different approach. They are privately building superintelligence, and they say they will only release this technology to the world when it is deemed safe. So I think they're all kind of working towards the same thing, but superintelligence is the latest buzzword. Katie Drummond: I love that they're going to be in hiding for many years. Michael Calore: Yes. Katie Drummond: I can't wait to see them crawl out of their bunker with their safe AI. Michael Calore: Yeah, they're keeping it chained in the basement. Katie Drummond: It is worth noting, Ilya aside, not all AI leaders are into the superintelligence hype. So these are people who have historically kept a lower profile, but more of them have actually started to speak up recently, which I think is notable. So Thomas Wolf is one example. He's Hugging Face's cofounder and chief science officer. He called some parts of Amodei's vision "wishful thinking at best." That would be the vision of AGI.
And Demis Hassabis, the CEO of Google DeepMind, has reportedly told his staff that in his opinion, the industry could be up to a decade away from developing AGI, noting that there is a lot that AI simply can't do today. So again, that's skepticism around AGI. We're not even talking about superintelligence. So God knows when that's going to be ready. I guess when Ilya lets us know. Michael Calore: Let's talk about all of the people involved here. You really need all the top talent in order to really compete if you're going to build anything regardless of what you're calling it. And in this investment that Meta made with Scale AI, they get Alexandr Wang, who comes to the company bringing key people with him from Scale to work at Meta in the superintelligence lab. And this is happening at a time when AI talent is in super high demand with all the leading engineers being offered millions and tens of millions of dollars a year to work at the big companies. And apparently Meta has been offering up to nine-figure compensation packages to get people to come work in the superintelligence lab. Katie Drummond: Hold on. Can you just articulate nine figures? So that's not hundreds of thousands, it's not millions, it's not tens of millions, it's hundreds of millions? Michael Calore: It is over $100 million. Lauren Goode: What? Michael Calore: Yes. Katie Drummond: I mean, I'm speechless. I knew about the seven and eight figures, which is also just jaw-dropping, but nine figures is unreal. That is unreal money. Lauren Goode: I'm literally like, I got to read about this now. What? That is so crazy. I'm going to revise what I said earlier where I said, "Oh, the companies trying to level up in AI right now are looking at compute power and they're looking at data." They're also looking at talent. That is a huge, huge part of this. I'm still speechless that this is how much money these folks are getting offered, but I guess that's what Meta feels that it needs to do. I wonder how this is going to be reflected on its next earnings statement. We also saw that earlier this week, a longtime machine learning engineer and research scientist at OpenAI just got moved into a new position as the head of recruiting at OpenAI, which is fascinating. I mean, really, the initial reaction is kind of like, huh, that's an interesting career change. But then you think about, oh, this guy is going to talk the talk. He now has to go out and recruit top, top talent for OpenAI, continually recruit the top talent for OpenAI, and they now need to compete with Meta offering millions, bajillions of dollars to engineers. So there is indeed, I think, a race for talent happening right now. Sequoia Capital investor, David Cahn, just wrote a blog post about this that I was reading where he did say that talent is the new bottleneck in AI, and he likened it to basically building a sports team. They're all backed by some mega-rich tech company or individual, the star players can command these crazy pay packages in the tens of millions or hundreds of millions of dollars. Unlike sports teams though, where players often have long-term contracts, AI employment agreements can be short-term and liquid, which means that anyone can be poached at any time. Katie Drummond: It's fascinating, it's grotesque. I want to know everything about how much these people are being paid. I want to know everything. And for Meta, it will be interesting to see how much money it takes for them to get top talent. I mean, money matters.
If someone offered me a hundred million dollars to work for someone that I wasn't so excited about at a company that I thought had a so-so track record overall and a pretty poor one around AI, I mean, a hundred million dollars moves the needle. And it will be very interesting to see whether the Anthropics and OpenAIs and Googles of the world can compete. I mean, we know that Zuckerberg is doing a lot of this recruiting himself. He's personally reaching out to candidates. I wonder, and we will find out, whether that is to the company's benefit or not. Lauren Goode: I mean, independent of how you might feel about Mark Zuckerberg too, Meta is just … It's been around for a while at this point. It's a twenty-year-old company. It's publicly traded. You'd probably get some really nice equity package on top of that too. But when you join a rocket ship like an OpenAI, you're joining because you think that if you get in early enough … At this point, it's not even that early, but that at some point you're going to become a multi, multi, multi-millionaire if that company either accelerates or it sells or something like that. And then that creates the flywheel effect that we always see in Silicon Valley, right? Early Google employees who left and went and started other things. We're going to see this wave eventually of OpenAI folks who leave and start other things. But if you're getting a comparable offer from OpenAI, or Meta at this point, which one are you going to go to when you're thinking about, really, the future? Michael Calore: And I mean, there are a lot of people who won't take that money. I mean, if you think about the type of people who are commanding these super high salaries, they have been paid very, very well for a number of years, maybe their company was acquired and they had a big payout from that, so now they're sitting comfortably in a position at Anthropic or at OpenAI. And also the project that they're working on is something that matches their skills and maybe they want to see it through. So there's a bit of ego involved, there's a bit of life decision involved, and there's ethics involved, like, do you actually want to go work for Meta? Do you want to go build this thing that they're building? Especially after they made the announcement that they're going to start allowing their technologies to be used by the Department of Defense and they're going to start doing war stuff with the AI tools that they're building and the XR tools that they're building. So yeah, I think there's a lot of people who are just sitting pretty right now and they have to decide whether or not that money is worth it to them as people. Lauren Goode: Do you guys ever spend time on the app Blind? Michael Calore: No. This is the one where people talk about what it's like to work at a company? Lauren Goode: It is about working for a company because you have to affiliate yourself with a company when you sign up for the app, but it's all topics. And oftentimes it's people coming to the group with a compensation package and saying, I got offered this from Meta or Amazon and what should I do? And you just realize how distant this world is sometimes from the way the rest of the world or the rest of the country lives. People who are like, "Well, I don't know. Should I take this $700,000 package from Amazon? And that's without stock equity, benefits, or should I take this other comparable package from a similar company, but they're going to allow me to work from home? 
And I don't know though because I'm 40 and I only have 10 million in retirement." I'm like, oh my God, that is the world we're talking about. Michael Calore: Yeah. Katie Drummond: We need to do more reporting on this. I think that the compensation of people in Silicon Valley is fascinating. Lauren Goode: Well, if anyone would like to weigh in, if you're a recruiter, if you're a person who's been made one of these offers from the Meta superintelligence lab, we want to hear from you. Michael Calore: Big money. Katie Drummond: We sure do. Lauren Goode: Our signals are out there. Michael Calore: Big money, no whammies. Lauren Goode: Now we know what Katie would leave us for to go work for Mark Zuckerberg. Katie Drummond: A hundred million dollars is a lot of money. It's a lot of money. Lauren Goode: It's a lot of money. Katie Drummond: That would be tough for me. I don't think I could do it. Lauren Goode: Yep. If you invest it, well, it'd be a lot of money for your kids' kids' kids. Katie Drummond: I know, but then I'd have to tell my kid what I do, and I don't know that I could do that. I'm being totally honest. I don't think I could do it. Let me be clear, there are a lot of fantastic people who work at Meta. I mean, this is not a repudiation of anyone's decisions or career choices or where they have chosen to work, given my background and what I do for a living, yeah, I don't know. I don't think I could do that. Michael Calore: You get to be part of the superintelligence revolution. Katie Drummond: I don't want to. Michael Calore: Maybe just use the chatbot and then you can feel like you're a part of it. Katie Drummond: Yeah, there you go. I have some pressing and highly personal questions for Meta's chatbot, and as soon as we get off this recording, I'm going to go ask all of them in private. Michael Calore: I look forward to reading them on the [inaudible 00:30:11]. Lauren Goode: Katie's like, how do I extract myself from a work project that has me locked in a room for two hours every week? Katie Drummond: Oh dear. Michael Calore: OK, let's take another break and we'll come right back with recommendations. All right, thank you both for a great conversation about superintelligence. So I think it's time to give our listeners something from our own superintelligent human brains, our recommendations for the week. Lauren, would you like to go first? Lauren Goode: Sure. I recently learned that by using generative AI tools like ChatGPT, you can get your color analysis done. Have either of you ever done this? Michael Calore: No. Katie Drummond: No. Lauren Goode: So this is a thing that is part of the beauty influencer world online where typically you would pay someone, sometimes a human, sometimes an app that has human input, to analyze the color of your hair, skin, eyes, skin tone, all that, and tell you what season you are and then tell you what clothing you should wear in a way that accentuates your whole situation. Katie Drummond: Did you find this revelatory? Lauren Goode: Yeah. So recently when I was hanging out with some friends, one of them had had her color analysis done and we were talking about it and she said, "Oh, you can just do it on ChatGPT." And I was like, "What? You don't have to pay someone a couple hundred dollars to do this?" So she uploaded our photos into ChatGPT and I got a color analysis done. And so I'm a deep autumn, in case anyone wants to know. Michael Calore: It's a burning question. Lauren Goode: That was the burning question. 
I thought maybe I was a winter, but I'm a deep autumn. And so I think to date, this is the most interesting use case of ChatGPT I've experienced so far. Michael Calore: Ever, of all … Lauren Goode: No, that's not true. Michael Calore: Of all your experiences. Lauren Goode: That's not true. The other day it told me how to cook salini mushrooms instead of cremini mushrooms. I'm quite certain salini mushrooms don't exist. So it was helpful in that regard too. Katie Drummond: Yikes. Lauren Goode: But no, I've used it for other things too. I've used it for research and reasoning and fun things like that. Michael Calore: That's pretty good. Lauren Goode: So maybe later, actually, I'm going to upload photos of you both to ChatGPT and ask it to do your color analysis. Michael Calore: I do not consent. Lauren Goode: And then I'll tell you … OK. What color you should be wearing. Both of you are wearing all black right now. Katie Drummond: As usual. Michael Calore: Yeah, I was going to say. I call those days weekdays. Lauren Goode: Anarchists in the room. I love this. OK, that's it. That's my recommendation. Katie Drummond: I was thinking about what to recommend and it was either going to be a book, a kid's movie or food, and I was like, no, girl, you've done all of those already. You have to think of something else. I have an AI-related recommendation actually, which is, I had someone come recently to help me out with my outdoor plants, like our little backyard area in Brooklyn, and I was telling him that all of my indoor plants are dying and struggling, and I was just so confused. I was like, which window should this one go in? And what … Blah, blah, blah, blah, blah. How often do I water all of these things? I have many plants. And he recommended this app called Picture This. And you take photos of all your plants and you upload the photos into the app, it tells you what kind of plant it is, it tells you how often to water it. You can use your phone to show the app how much sun is coming in through the window, and it'll tell you if that's enough sun, too much sun, not enough sun. Oh, and you can take a photo of the plant and it'll tell you if it's sick and what it is struggling with, which is very upsetting, but very helpful. I have many struggling plants. But it's very, very cool. It's definitely highly judgmental. I get push notifications now saying, "Don't you want to take care of your plants?" And I'm like, well, I do, but I also have a job. It's a very, very interesting product, and if you have plants, I highly recommend it. Lauren Goode: Picture this. Katie Drummond: Picture this. Healthy plants, healthy you. Superintelligence, picture this. Lauren Goode: Mike, what's your recommendation? Michael Calore: I think given the current geopolitical state of the world, it is time to re-watch Dr. Strangelove , the 1964 film by Stanley Kubrick. Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb . It's about escalating political tensions based on a misunderstanding that leads to nuclear war. It's a farce. It's very funny. It's also very dry, but it's a great movie. I re-watch it about once a year, unfortunately. I'm usually compelled by current events to watch it. So I would say it is a good time to watch Dr. Strangelove . 
And I'm not saying the world is going to end, but whenever we start talking about nuclear power and we start talking about the people in the world who have their fingers on the button, it is important to remind ourselves that these are human decisions that people make about our future, and it's a great movie to help you process that information. Katie Drummond: Wow. Lauren Goode: So Katie's watering her plants and I'm cutting bangs and you are watching movies about the end of the world. I see who among us is really diving in and who's … Michael Calore: So a lot of people ignore the void. A lot of people acknowledge it. I'm one of those people who puts my face right up against the glass and just stares at it. Katie Drummond: That's just like my husband. Not to make it all about him, but … I feel like I cover the void. I do the void for my job. Every day, I'm in the void. So when I'm out of the work void, I want to go into la la land. And so, as I was telling people in Slack today, last night my husband was texting me links to used motorboats because he was like, the safest place for us to be during a nuclear attack is in the middle of a body of water. Lauren Goode: Good lord. Katie Drummond: And I was like, and what exactly do you think we're going to be doing in this little motorboat in the middle of the Atlantic Ocean off the coast of New York City? I think if it comes down to motorboat or death, we are probably going to die. Lauren Goode: Yeah, yeah. Katie Drummond: Just saying. Lauren Goode: That was before bed, like before you were supposed to go to sleep? Katie Drummond: Right as I was lying in bed trying to go to sleep and the thing is like ping, ping, links to used boats. Lauren Goode: I mean, I think we're all sort of void adjacent these days, so you can't really ignore it. Katie Drummond: You can't ignore it, but it's a choice to lean into it in your personal time, I will say. Michael Calore: I would say that if you're going to do that exercise, there are few people it's more delightful to do it with than Peter Sellers and George C. Scott and Sterling Hayden and Stanley Kubrick. It's a great movie, so stream it tonight. Katie Drummond: We'll link to it in the show notes. Michael Calore: Thanks for listening to Uncanny Valley. If you liked what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you'd like to get in touch with us with any questions, comments, or show suggestions, write to us at uncannyvalley@ Today's show is produced by Kyana Moghadam and Adriana Tapia. Amar Lal at Macrosound mixed this episode. Jake Lummus was our New York studio engineer. Daniel Roman fact-checked this episode. Jordan Bell is our executive producer. Katie Drummond is WIRED's global editorial director, and Chris Bannon is the head of global audio.

Why Silicon Valley Needs Immigration

WIRED

09-06-2025

  • Politics
  • WIRED

Why Silicon Valley Needs Immigration

A general view of the UC Berkeley campus, including Sather Tower, also known as The Campanile, as seen from Memorial Stadium in Berkeley, California. Photo-Illustration: WIRED Staff; Photograph:All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. Expanded deportations, a virtually shut-down asylum process, increased scrutiny of H-1B visa applicants—immigration policy has been overhauled under the latest Trump administration. And just last week, the Trump administration said it would begin revoking the visas of some Chinese students who are currently studying at US schools. On today's episode, we dive into the impacts that these changes could have on the tech industry from the talent pipeline to future innovations. Articles mentioned in this episode: The Trump Administration Wants to Create an 'Office of Remigration' by David Gilbert US Tech Visa Applications Are Being Put Through the Wringer by Lauren Goode You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Katie Drummond on Bluesky at @katie-drummond. Write to us at uncannyvalley@ How to Listen You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too. Transcript Note: This is an automated transcript, which may contain errors. Michael Calore: A quick note before we begin today. We recorded this episode before the Trump administration's travel ban barring citizens from 12 countries from entering the United States and before its proclamation to suspend all new student visas for students enrolling at Harvard University. Although we will get to student visas quite a bit in this episode. How's everybody doing this week? Lauren Goode: I'm good. I just got back from Katie's motherland, Canada. Michael Calore: Oh. Lauren Goode: Yeah. Katie Drummond: Lauren and I were in Vancouver together. Lauren Goode: We were. Katie Drummond: Although I saw her for probably 15 minutes in the span of like five days. I'm doing okay. I also, as we just established, was in Vancouver with Lauren at Web Summit. I took a red-eye home on Thursday night and it was three hours late and so that was a lot. Michael Calore: Yikes. Katie Drummond: And then Lauren, right before we started recording, just told me that I have a bobblehead, so I'm just grappling with that feedback. Lauren Goode: I did not say bobblehead, I said you had celebrity energy because your head presents well on camera. I don't know. Mike, how are you doing? Katie Drummond: Yeah, how are you doing, Mike? Michael Calore: I'm staying out of this one. Also, I have a gigantic head. I can tell you that I wear a size eight fitted cap, which is the largest size that they make. Katie Drummond: Do you want to know what size I wear? Michael Calore: Yes. Katie Drummond: I have to shop at a specialty hat store. Because my head actually doesn't... I can't wear. Lauren Goode: What is this store called? Katie Drummond: I can't wear normal hats. Lauren Goode: Is it called Bobblehats? Katie Drummond: No, I'm going to look it up. It's from Oddjob Hats. The last hat I bought was called Big Running Hat. Just Big Running Hats.
Lauren Goode: Do you also have one called Big Walking Hats? Katie Drummond: Probably. Probably. Lauren Goode: Oh. Michael Calore: Oh, it's too much. Lauren Goode: All right. Michael Calore: Should we get into it? Katie Drummond: Let's do it. Lauren Goode: Let's do it. Michael Calore: This is WIRED's Uncanny Valley, a show about the people, power, and influence of Silicon Valley. Today we're going to be talking about the Trump administration's policies around immigration and the effect that those policies are poised to have on the tech industry. Since day one of the current administration, immigration policy has been overhauled: the asylum process was virtually shut down, the obscure Alien Enemies Act was invoked to deport hundreds of people, and birthright citizenship is being challenged in the US Supreme Court. Visas have been under increased scrutiny. WIRED recently reported how the H-1B visa application process is becoming more hostile, and last week the administration said it would begin revoking the student visas of some Chinese students who are currently studying at US schools. So today we're going to dive into the impacts that these changes could have on the tech industry from the talent pipeline to future innovations. I'm Michael Calore, director of Consumer Tech and Culture here at WIRED. Lauren Goode: I'm Lauren Goode. I'm a senior correspondent at WIRED. Katie Drummond: And I'm Katie Drummond, WIRED's global editorial director. Michael Calore: I want to start us off by focusing on how the Trump administration has been handling student visas. Just last week, Secretary of State Marco Rubio announced that the administration would start to "aggressively" revoke visas for Chinese students. The State Department said it would focus on students from critical fields and those with ties to the Chinese Communist Party, but also that it would just generally enhance the scrutiny across the board. The vagueness of these guidelines has sent students, parents and universities into an emotional tailspin. What do we make of these latest developments? Lauren Goode: So there were actually two directives that went out last week and I'm sure we're going to hear more, but I think they're both worth noting. The first was that a directive was sent to US embassies around the world telling them to pause any new interviews for student and visitor visas, and that included the F, M and J visas, until further notice. And this whole idea was that it was in preparation for an expansion of social media screening and vetting. So basically the State Department is going to be looking much more closely at students' online activity, social media activity, and consider that as a part of their interview process when they're applying for a visa to the US. That was already a part of the application process, but now it's just going to be expanded. We don't really know what that means. The other was the revoking of visas for Chinese students as you mentioned, Mike. And really I think what this does is it adds another tool to this current Cold War of sorts that we're having with China, whether it's with the tariffs or whether it's measures like these, it's clear that the current administration wants to have the upper hand. And what we've reported at WIRED is that if this continues and the courts allow it, this would all have a significant effect on higher education because roughly a quarter of the international student population in the US is from China.
And also, this is something I think a lot of people don't realize, I personally didn't realize until I started doing more research into this: international students often pay full tuition or close to it when they come here into the United States for school, which makes it an economic lifeline for a lot of these universities and also in some ways helps offset the costs for domestic students, US students who are getting scholarships or getting partial reduction in tuition and that sort of thing. I do think in general it's dangerous territory to start targeting students of a specific nationality for these alleged national security reasons. There are going to be questions about how effective it is long term, but also how this could potentially weaken the US technology sector in the long term. Katie Drummond: Yeah. And I think, Lauren, you're right to point out these two directives and I think that both got a fair bit of press attention, but I was surprised that the first announcement, this idea that we are going to be doing enhanced social media screening and vetting of international students and people applying for visas to come to the United States, the fact that that was not an international outrage when that was announced is very telling to me in terms of how much is happening in the news in the United States every single day because that is a very chilling announcement to be coming from the Secretary of State in this country. It is a massive free speech issue and really speaks I think to what will be an ongoing theme for WIRED and unfortunately already is, which is just the techno-authoritarian country that we now live in, where these tools are essentially being weaponized to surveil and monitor not only US citizens, but people who proactively want to live and work and study here, so that if you dare have an opinion that is contrary to the opinion of the Trump administration, you could potentially have your visa revoked or not even be able to qualify for a visa. I think it's also important to note that everything that Lauren just spelled out and that we're talking about is part of this much larger conflict that's been unfolding between the Trump administration and higher education. So you have this Ivy League battle playing out between Trump and Columbia, Trump and Harvard. A lot of that obviously having to do with free speech issues and the Trump administration, again, essentially looking for institutions of higher education to adopt their viewpoint as opposed to being places where a plurality of points of view can be discussed and debated and held. There was already an attempt made to block Harvard from enrolling international students. A federal judge has blocked that for now, but we will have to see where it nets out. And I think regardless of where that one legal decision nets out there is, for so many reasons, this chilling effect where the United States is all of a sudden no longer a desirable destination for students, both at an undergraduate level and a graduate level. You have not only the Trump administration basically going to war with the best colleges in the country, you have them going to war with the actual student visa process, and then you have them going to war with research and science and even blocking already billions of dollars of research funding that is earmarked ostensibly for these institutions and now means that these institutions are much less attractive destinations.
So it's not like, oh, a judge reverses a couple of decisions or one decision or blocks one thing from happening and all of a sudden we're in the clear again, this is already very clearly becoming a systemic and long-term crisis for the United States. Michael Calore: And this choking off of talent coming into research institutions and into jobs in the United States is also happening at a moment when China and the US are currently involved in an AI arms race. In January, the Chinese AI company DeepSeek showed off a reasoning model that is demonstrably and seemingly just as powerful as ChatGPT, but was developed for a fraction of the cost. So the US definitely needs to keep bringing in top AI talent, but how are these restrictions on student visas going to potentially shape the growth of the AI industry in the US? Lauren Goode: Yeah, this is something that when the news started to trickle out last week, we at WIRED were thinking, "Okay, this is really in our wheelhouse." We cover AI so closely, we have for years, and automatically the question is what does this mean for the AI race? We ended up reporting a story last week, it was myself, a few other WIRED folks, Kate, Louise, and Will, and some of the sources that we spoke to were pointing out the contradiction that exists here in the White House saying that AI is one of its top priorities and then trying to send the people who are doing this kind of research, this critical research for us here in the United States, home back to their home countries, or not letting them in in the first place. And at some US colleges, I would say probably a fair number of them, international students do make up the majority of doctoral students in departments like computer science. One of our colleagues, Kate Knibbs, talked to someone at the University of Chicago who said that foreign nationals accounted for 57% of newly enrolled computer science Ph.D. students last year. We know that immigrants have founded or co-founded nearly two thirds of the top AI companies in the United States. That's according to a 2023 analysis by the National Foundation for American Policy. And this is something that's been going on for a long time. I had this interesting conversation with a well-known economist last week. His name is William Lazonick. I was asking him his thoughts on this crackdown on student visas, and he made an important observation, which is that foreign students pursuing those STEM careers have actually been critical to the very existence of graduate programs in those fields. And some of this is cultural. Back in the 1980s, there was this big shift that was happening in the US around money basically. It was the era of Reaganomics and greed is good, and American students were gravitating towards careers in finance. At the same time, Lazonick said, there were significant advancements happening in microelectronics and computing and biopharmaceuticals, and that opened the window for foreign students to say, "We're going to study STEM." So what we are potentially on the brink of right now by thwarting or revoking these visas for foreigners could literally affect the outcome of American technology and science development for the next several decades. Katie Drummond: And particularly at a moment where, as you said, we're in this Cold War with China, we're in this AI arms race. You hear it from the administration, you read about it in WIRED, you hear about it from Sam Altman, other leaders of the AI industry, this like, "We must beat China. We must beat China."
And then stuff like this happens and you feel like, "Let's just hand it to them. Let's just give it to them." Because we are basically doing that by disincentivizing not only Chinese students, but just brilliant people from all around the world, from coming here, bringing their intellect here, bringing their ideas here. We're basically telling them, "Go somewhere else. Maybe go to China." And something I did find fascinating in that reporting, Lauren, was that the vast majority of PhD students from China and India actually typically intend to stay in the US after they graduate. While the majority of people from other countries, places like Switzerland and Canada, report actually planning to leave, maybe they want to go back to their home country, maybe they want to go somewhere else, but it's rejecting the people who are most committed to staying here and to contributing to new technology in the United States is a certain kind of choice. And so other countries are already trying to take advantage of that. Hong Kong is already trying to attract Harvard students. The UK is setting up scholarships. There's a lot going on outside the United States in terms of basically trying to make the brain drain happen for us. Our loss is all of their gain. But when you put it in the context of this AI race and the US and China of it all, it feels like what we are doing is distinctly disadvantageous for us in this moment. Unless you both disagree and think I'm missing something. Lauren Goode: No, we always say on this podcast, it would be nice if we vehemently disagreed with each other because it would create tension. But I think in this case, we are all aligned on this. Michael Calore: Yeah. This scrutiny over foreign nationals, it doesn't just end at academia, of course. It also extends into the workforce here in the US and work visas. Lauren, you recently reported on how the process to obtain an H-1B visa has become more difficult recently. Can you tell us a little bit about what H-1B visas are and why they matter so much to the tech industry in particular? Lauren Goode: Sure, yeah. So H-1B visas are work visas that are granted for specialty occupations. They're typically valid for three years. They can be extended in some cases. This type of visa was first introduced in 1990 as part of a broader immigration act. And the idea is that it's supposed to help employers hire people with specialty skills that they might not otherwise get from the talent pool that already exists in the US. And the H-1B is a bit of a controversial visa. Even just saying, so you can hire people outside of the US because there are people who don't have that skillset here, naturally prompts the question for some people, "Wait, why are we not educating and training people in the US to have those jobs?" But basically what I was starting to hear from immigration attorneys who I was speaking to is that the requests for evidence, RFEs, had shot up since Trump took office in January of this year. Typically, when a person is applying or petitioning for an H-1B, their lawyer submits a bunch of paperwork on their behalf and that typically will include resumes, awards, letters of prestige, letters of recommendation from colleagues and friends and that sort of thing. You basically have to put together this packet to prove that you're worthy of this specialty visa. And then sometimes it would get bounced back and USCIS would ask for more requests for evidence. In this case, a lot of visa applications are being sent back. 
There are a lot more RFEs or requests for evidence for applicants. And that's something that four different immigration attorneys I spoke to said they're seeing happening. It's also not just happening across H-1B. There's another type of visa called the O-1 Extraordinary Ability visas. Once again, this is a specialty visa. A lot of tech entrepreneurs, engineers, and founders alike will come here under the O-1 visa and folks in that world are starting to say that they're getting pushback on their applications as well. All of this, it's instilling fear amongst some entrepreneurs and tech workers in the Valley, and it's creating a climate of uncertainty where people who seemed so committed and excited to come here and build their companies here and contribute to the technological environment here are now rethinking that because of what's going on with visa applications. Katie Drummond: Ugh. That is so bleak. 66% of people working in tech in Silicon Valley are born outside of the US. That is just an astonishing number to think about that being at risk. Lauren Goode: Yep. We're talking about the rank and file in a sense, but also just look at some of the CEOs- Katie Drummond: Yeah, look at the leadership. Lauren Goode: Of the companies we're talking about. Sundar Pichai and Satya Nadella, and I think the most... Should we talk about the most obvious one? Katie Drummond: I was going to say, just look at Elon Musk. Lauren Goode: Yes. Katie Drummond: What an international success story he is. Lauren Goode: Yes. Katie Drummond: What a success he has been for the United States of America. I will say, the H-1B visa program is not perfect. It's certainly been criticized for not being a fair system or a fair lottery, but despite the fact that this is an imperfect system, none of this actually feels like an approach to fix any of these problems or challenges, it's more just creating extra adversity and uncertainty around a process that's already very lengthy and very expensive. Michael Calore: So these challenges to the visa application have ramped up recently, but we're already seeing the effects of this, right? Lauren Goode: Yeah, this is something that's harder to quantify right now because these visa policies are just getting put in place. Everything's just changing. But I think we can qualify it by saying that the folks that we're talking to in Silicon Valley who are either here on a visa or they were hoping to stay on an extended visa or they were thinking of maybe coming here and we're working with attorneys to get that process started are now just reconsidering everything. You're already throwing yourself into a pretty uncertain world when you decide to launch a startup. You're choosing hard mode for yourself when you do that. So now throwing this uncertainty into the mix and thinking like, "Am I actually still going to be able to be here in three years if that's how long it takes me to actually make a product or build up a profitable business or raise my next funding round or something?" And if you can't see beyond that, I don't see how you realistically say like, "Oh, the US seems like a good bet right now." Katie Drummond: It just underscores how systemic and long-lasting this is going to be. Even if this were six months of bad federal policy and somehow the administration wakes up overnight and flips a switch and we see a lot of this pressure and additional scrutiny and adversity around immigration, around H-1B visas ease, there has already been so much damage done. 
We are going to feel this in this country for such a long time. Michael Calore: One of the things about immigration policy that we have to talk about is something that our colleague David Gilbert has reported on for WIRED, and that is, as part of a reorganization of the State Department, the Trump administration is creating an Office of Remigration. And in very simple terms, remigration is an immigration policy embraced by extremists that calls for the removal of migrants including non-assimilated citizens. What do we make of this? Katie Drummond: So I talked a little bit earlier about being surprised by Marco Rubio announcing that enhanced social media scrutiny. I was surprised that that wasn't more of an outrage, that didn't get more coverage. This is even more extreme in that context, and it is a truly shocking development in this administration's war on anyone who is not a white American. That is basically what this is. I was shocked when I read this story last week and realized that this should be front page news for every news organization in the United States, and somehow it just wasn't. Lauren Goode: So the whole idea behind this is that they want to create a white ethnostate in this part of the world. Katie Drummond: That is our understanding of it, yeah. There is a long history to the idea of remigration and it really comes together through the lens of MAGA, it was present in the administration's first term as well. You had the Muslim ban, you had this idea of building a border wall, and I think what's so different this time from 2016, there's a lot that's different this time, I think big picture as we have seen, what's different is that this time the administration really means business. They're buttoned up, they're here to get the job done. And so it's the speed and the intensity at which these ideas, this very racist idea of remigration, is going from just being something that's done in a scattershot way to now showing up as a tactical, specific policy proposal that is being released in official government documents. It's just a very different kind of approach and it feels much more real. It is much more real. And it's happening so quickly and amid I think so much other news that people are just not seeing that it's happening, and that's really scary. Lauren Goode: And what happens too I think is that there are all different kinds of immigration policies we're talking about here and if you're not paying close attention you might conflate them. There's a difference between the asylum process being shut down and the Alien Enemies Act being invoked with what may be going on with student and foreign visitor visas, Extraordinary Ability visas, which is different from what's being proposed with this remigration document. And a lot of it is happening under the guise of, "This is better for national security." There are of course going to be some instances in which that is true. For example, Stanford Review reported, I think it was a few weeks ago now, that they'd become aware of Chinese nationals actually trying to spy on Stanford University and its students. They'd purported to be other students. This sort of thing does happen, there are nations that are our adversaries that want to get information from the United States and wield it in nefarious ways, but for the most part, the Trump administration is putting immigrants in this giant bucket and creating this world in which they're all a threat to the United States. And that is absolutely not the case.
Michael Calore: Yeah, these policies are going to obviously shape the culture of this country and they're going to shape the business that is done in this country. But of course, they are absolutely going to shape the technology industry. So let's take a break and when we come right back, we'll talk about the effects that these policies will have on tech. Welcome back to Uncanny Valley. We've been talking about the Trump administration's immigration policies and how they could shape the future of tech development in the US, and I'm curious to know how tech companies and workers have been reacting to these measures so far. Lauren Goode: I would say the number one thing I've heard directly from folks is that they are scaling back on their travel to conferences, whether they're academics or tech workers. And that may have a little bit more to do with what has been going on in some intermittent cases at the border, of people getting detained at the border. But also people are thinking about the status of their visa right now and whether they're an American citizen or they're here on a visa. Tech conferences and academic conferences are just a part of this world. Katie and I were just at one in Vancouver. And so if you have concerns about being let back into the United States after traveling, you may decline to go to one. And the same goes for universities. I think Brown University urged its international staff and students to postpone any plans to travel outside of the US out of an abundance of caution. Katie Drummond: It's interesting to think about the flip side of that because for most of the tech industry and the human beings who work in that industry, this is a very scary thing. It's affecting how they do their jobs, it's affecting whether or not they travel. And then you have the flip side of it, which is where there are certain parts of the tech industry who are really benefiting from these new policies. And I think Palantir is probably the best example of that. So Palantir is the brainchild of Peter Thiel, obviously a megadonor to the GOP. And Palantir is really making it rain with the Trump administration, and they are benefiting tremendously from these policies and from DOGE and administration efforts to centralize and unify data about American citizens and about immigrants. God knows what you could use all of that information for once it's centralized. Palantir recently won a $30 million no-bid contract to build ImmigrationOS, which essentially provides real-time data about the whereabouts of migrants and about deportations. Palantir obviously has worked with the US government for a very long time. They've had a contract with ICE since 2011, so that's almost 15 years ago. But we are really seeing the surveillance state that Palantir helps support grow exponentially and grow very quickly as a result of the administration's aims around immigration for one thing, but also just their aims to basically stand up and run an authoritarian state that would impact not only immigrants but US citizens as well. Michael Calore: So some tech companies are obviously seeing a paycheck opportunity in these immigration policies, but we can't say that the tech industry is operating as any kind of bloc, like they're not in ideological lockstep with the immigration policies. And a lot of key tech leaders have been outspoken about the fact that they're not too happy with these policies, right? Lauren Goode: Yeah. It's honestly a little bit confusing.
Someone like Elon Musk has in the past been in support of the H-1B. He employs more than 1,000 people on that type of visa. He even used it himself in his early years in the US, and he has in the past tweeted in support of immigrants being in Silicon Valley and contributing to the economy here. More recently though, he has called for a reform on it, and he's not alone in that. Same with Marc Andreessen, obviously one of the most vocal people, influential people in Silicon Valley. Surprisingly, they've got some interesting bedfellows. The Democrat Ro Khanna of California, Vermont's Bernie Sanders, they're also calling for a reform of the H-1B program. It goes back to what Katie was saying earlier, that there have been some critiques of H-1B. There's been a lot of backlash to the program, and it's hard to know sometimes whether it's coming from this kind of vitriolic or potentially racist place around how people feel about immigrants versus, "No, I'm actually in support of this because it's good for the US economy and the tech industry, but the process is broken." Katie Drummond: To me right now what we're looking at in the year 2025 is just part of this larger trend of tech leaders staying silent or muting their criticism or maybe posting something on X, but largely staying silent when it comes to politics, when it comes to political issues, at least publicly. We don't know what's happening behind the scenes, what kinds of lobbying efforts are going into trying to sway the administration one way or another when it comes to H-1B visas, when it comes to the importance of brilliant people from around the world being able to study and work in the United States and in the tech industry. But publicly for sure, we are not seeing that really robust resistance on the part of the tech industry. And that is certainly strategic because these guys know that this time the administration means business, they need to play ball, they need to work with this administration. And so we can only hope that behind the scenes there are more vigorous discussions happening than what we're seeing play out publicly. Michael Calore: It's distressing to me that the disconnect is so loud here because we really have to underscore how important of a positive role immigration has played in the growth of the tech industry. And in Silicon Valley in particular, like Lauren you were talking about earlier, some of the largest companies like Google and Microsoft have all had either founders or co-founders or CEOs who are first or second-generation immigrants. And if you look at a list right now of the country's current startups that are worth more than a billion dollars, more than half of them have an immigrant founder. Yeah. So the longterm stakes of keeping talented researchers and engineers and businesspeople out of the country seem deeply, deeply consequential. Lauren Goode: It's also just not a zero-sum game. If the tech industry continues to grow, presumably there would be enough room for having high-skilled American workers and high-skilled foreign nationals working together. Michael Calore: As it always has been. Okay, let's take another break and we'll come right back with recommendations. Thank you both for a great conversation. We are going to shift gears and talk about something completely different, which is our own personal desires and loves. We're going to do recommendations. Who wants to go first? Katie Drummond: My recommendations. 
It's been a busy time, so I feel like I'm a little bit limited on hobby activities, but a book I just finished that I do recommend is Barry Diller's memoir. If you're not familiar with Barry Diller, I believe he is now the chairman of IAC. But he's a long-time executive who invented the modern-day Hollywood approach to movie-making. It was great, so I highly recommend that. But my other recommendation is that last night I was thinking about what to have for dinner, and I made an omelet, and I haven't had an omelet in a while. The omelet had a red pepper, it had spinach, and it had shredded cheese, and it was just a really nice reminder if you're thinking about what to have for dinner tonight, a nice omelet, some toast with French butter, a can of seltzer, you might just be all set. That and a book. Michael Calore: Lauren, what is your recommendation? Lauren Goode: My recommendation is after you make your breakfast for dinner, you should check out the Brazilian film I'm Still Here. When I was flying home from Vancouver last week I started watching it on the plane and did not finish it. It was one of those things where I went home, unpacked, and then immediately bought the movie because I was like, "I need to finish watching it." Katie Drummond: Wow. Lauren Goode: And I loved it so much that I knew I wanted to own it. It's beautiful. It's beautifully done. It's based on a true story of a Brazilian congressman who is abducted during the military dictatorship in Brazil, which was at its peak in 1970, 1971. And really it's about his family too. It's about his wife, who's this incredibly strong woman in character, and their five children. And because it's the 1970s, the world is just different. Technology is limited, they have a family camcorder and that's really it. And the kids are just running around in their swimsuits all day long and things just feel simpler, but also complicated. And there are these scenes in the beginning where people are basically being rounded up by the military and you hear families having these conversations of, "Should we stay or should we go?" It's chilling, but it's a beautifully done film and so I highly recommend I'm Still Here. All right, what's your recommendation? Michael Calore: I'm here to tell the people to watch Mountainhead. This is a fiction film that feels closer … Lauren Goode: Just when I thought we were getting away from the tech bros. Michael Calore: It's a fiction film from Jesse Armstrong who is the creator of Succession. This is a movie that he did for HBO. We're just calling it HBO. Everybody deal with it. It's a bro fest. It's about four tech founders who gather at a mountain retreat for a social weekend to catch up. There's a strict no-deals policy, but of course that policy goes by the wayside as soon as things start happening. The four principal actors are Steve Carell, Jason Schwartzman, Cory Michael Smith, and Ramy Youssef. And if you liked the witty back and forth and the weird absurdist drama in Succession, there's plenty of that here. It's also very much of the moment because the backstory that happens during the film is that the world is embroiled in a bunch of political chaos because of AI deepfakes on social media that are very inflammatory politically. Lauren Goode: Great. So also based on a true story is what you're saying. Michael Calore: Yeah. Katie Drummond: I do want to watch that. I would like to watch it. I will watch it. Michael Calore: It's not exactly a good time, but it is a rewarding time.
Lauren Goode: I also will watch Mountainhead, but I'm actually wondering, and Katie, while we have you on the podcast, if I can just ask you, does that count as work? Because I interview those-
Katie Drummond: No.
Lauren Goode: Bros all the time, and so I can just take two hours during the day and watch that, right? It's work.
Katie Drummond: Abso-fucking-lutely not.
Lauren Goode: All right, we answered that.
Katie Drummond: We sure did.
Lauren Goode: Ooh.
Michael Calore: Thanks for listening to Uncanny Valley. If you like what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you'd like to get in touch with us with any questions, comments, or show suggestions, write to us at uncannyvalley@ Today's show is produced by Adriana Tapia and Kyana Moghadam. Amar Lal mixed this episode. Jake Lummus was our New York studio engineer. Matt Giles fact-checked this episode. Jordan Bell is our executive producer. Katie Drummond is WIRED's global editorial director. And Chris Bannon is the head of Global Audio.

Let's Talk About ChatGPT and Cheating in the Classroom

WIRED

23-05-2025


Photo-Illustration: WIRED Staff/Getty Images All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. There's been a lot of talk about how AI tools like ChatGPT are changing education. Students are using AI to do research, write papers, and get better grades. So today on the show, we debate whether using AI in school is actually cheating. Plus, we dive into how students and teachers are using these tools, and we ask what place AI should have in the future of learning. You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Katie Drummond on Bluesky at @katie-drummond. Write to us at uncannyvalley@ How to Listen You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too. Transcript Note: This is an automated transcript, which may contain errors.
Michael Calore: Hey, this is Mike. Before we start, I want to take the chance to remind you that we want to hear from you. Do you have a tech-related question that's been on your mind or just a topic that you wish we talked about on the show? If so, you can write to us at uncannyvalley@ and if you listen to and enjoy our episodes, please rate it and leave a review on your podcast app of choice. It really helps other people find us. How's everybody doing? How are you feeling this week?
Katie Drummond: I'll tell you how I'm feeling. It's Katie here. My vibe levels are up. I'm feeling really good. I was at Columbia University earlier this week with five of our fantastic editors and reporters at WIRED, because we were honored at the Columbia Journalism School for our politics reporting. And so we got dressed up, I gave a speech, and it was so wonderful to have a minute to sit back and take a breath and think about all of the journalism we've done in the last several months and celebrate that. And it was also really, really cool to just see and talk to journalists who were graduating from journalism school and feel their energy and their excitement and their drive to do this work. Because I think, as you guys know, and you probably agree, we're all quite tired. Lauren, how are you?
Lauren Goode: When you said, "Because we're tired," I wasn't sure if you meant we're just tired in this moment or we are existentially tired, because I am a little tired in this moment, but I am not existentially tired. I'm here for the fight, Katie.
Katie Drummond: Oh, I'm so glad to hear that.
Lauren Goode: Yeah.
Katie Drummond: Yeah, I'm tired in this moment. I just think it's so nice to spend some time with a couple hundred people who are new to this and just so excited to get down to business. It was very cool.
Michael Calore: How much ChatGPT use is there at Columbia University in the journalism department, do we think?
Lauren Goode: Good question, Mike.
Katie Drummond: I really hope very little.
Michael Calore: Me too. For the sake of us all. This is WIRED's Uncanny Valley, a show about the people, power, and influence of Silicon Valley, and today we are talking about how AI tools like ChatGPT are changing education, from middle school to graduate school.
More and more students are using generative chatbot tools to gather information, finish assignments faster, get better grades, and sometimes just write things for them. Just this month, there has been a ton of reporting and discourse on this trend, and some of it has been fairly optimistic, but a lot of it has also been critical. As one user on X put it, "The kids are cooked."
Lauren Goode: The kids are all right.
Katie Drummond: Which X user was it? I can think of a few. I'm just curious. We don't actually know.
Michael Calore: So on this episode, we're going to dive into how students are using ChatGPT, how professors are using it, whether we think this trend is, in fact, cheating when the students use it, and what AI's place could be in the future of learning. I'm Michael Calore, director of consumer tech and culture here at WIRED.
Lauren Goode: I'm Lauren Goode. I'm a senior correspondent at WIRED.
Katie Drummond: And I'm Katie Drummond, WIRED's global editorial director.
Michael Calore: So before we dive into what has been happening with AI and students potentially using ChatGPT to cheat in their coursework, I want to have all of our cards on the table. Did either of you cheat in high school or in college? And if so, how?
Katie Drummond: I feel like I should go first here because I'm the boss and I want to set Lauren up for success in her answer. I did not cheat in college. I was a very serious person in college. I was getting an undergraduate degree in philosophy, which felt like a very serious thing to be doing at the time. So I was totally above board. And also, as I was thinking about this earlier, this was in the early 2000s and it wasn't, I don't think, or wouldn't have been particularly easy to cheat at philosophy back then, whereas interestingly, it would be pretty easy to cheat at philosophy now. You're reading a lot. You're writing a lot of essays. It's hard to imagine how I would've effectively cheated, but I didn't cheat. I did cheat in high school though. Everybody cheated all the time. I'm not saying I cheated all the time. I'm not going to answer that question, but I did cheat. I specifically remember we had graphing calculators, and we would program equations and answers into the calculators using special code so that if teachers went through our calculators, they wouldn't be able to tell that we were cheating. But we went to pretty great lengths to cheat on math exams, which is so stupid because I would've done great on the math exam regardless, but there was just something about being able to get away with it.
Lauren Goode: Do you feel like a weight has been lifted from you now that you have confessed?
Katie Drummond: No, I don't care. Look, I think that most students, at least in middle school and high school, dabble with cheating, and so I have no shame. What are they going to do? Strip me of my high school diploma? Good luck.
Lauren Goode: Yeah, it's kind of a rite of passage.
Katie Drummond: Exactly.
Lauren Goode: I was very similar to Katie in that I did not cheat in college. In high school though, I remember reading Cliff's Notes for some book assignments. My best friend and I also did some light cheating in high school because the first initial of our last names wasn't that far apart, and it was a small school as well, so she was often sitting in front of me and I was directly behind her. And we had a tapping scheme where we'd tap our pencils during Scantron tests.
Katie Drummond: Wow.
Michael Calore: Oh, like sending secret messages to each other.
Lauren Goode: Yeah, yeah. So if she was on question 13, she would sort of slide her Scantron to the side of the desk so that you could see which question she was on, question number 13, and then the person who had the answer would tap their pencil a corresponding number of times to be like, answer A, answer B, answer C. Anyway, I don't want to implicate her. Totally. She's an adult now with a career and two grown children, and I'm not sure if the statute of limitations has expired on this grand felony from Notre Dame Catholic High School. So maybe we can scrap that from the record. Thank you very much. Mike, did you cheat?
Michael Calore: No, I was a total goody-goody, like a super-duper, do-everything-by-the-book Eagle Scout kind of kid. Didn't cheat in high school. I did encounter a course in college that I had a really hard time keeping up with. It was the 19th-century British novel, and the reading list was absolutely brutal. It was one super long, boring book every week. And I mean, there was some good stuff in there, like Jane Eyre and Frankenstein. And then there were absolutely terrible books in there, like Barchester Towers and The Mayor of Casterbridge. So I learned the art of the shortcut. I would zoom in on one chapter and I would read the Cliff's Notes, and then I would read that chapter and I would be able to talk about that chapter in depth on a test.
Katie Drummond: Oh, that's very smart. That's smart. But not cheating.
Michael Calore: Not necessarily cheating. I don't consider Cliff's Notes to be cheating. I'm one of those people.
Lauren Goode: Why not?
Michael Calore: Well, because you're still actually doing the work and comprehending. And I think some of the examples that we're going to talk about don't even have that step in them. They just sort of skip over all the learning.
Lauren Goode: Yeah, but you're not understanding the full context of where that author fits into a certain category of other writers.
Katie Drummond: Lauren, I think that what you're trying to do right now is distract both us and our audience from your Scantron felony, when in fact, it seems like Mike is the most innocent party here. I just need to say.
Lauren Goode: Fair enough.
Michael Calore: At least I did the reading. All right, well, we've all come clean. So thank you for all of that. And we can acknowledge that, of course, cheating is nothing new, but we're talking about it now because of the use of AI tools like ChatGPT by students and how it has exploded in recent years. It's become a topic of debate in both the tech and education spheres. So just to get a sense of the scale of how much students are using AI, one estimate by the Digital Education Council says that around 86% of students, globally, regularly use AI. During the first two years that ChatGPT was publicly available, monthly visits to ChatGPT steadily grew and then started to dip in June, when school gets out.
Katie Drummond: 86%.
Michael Calore: 86%. So yeah, that's how much AI is being used in school.
Katie Drummond: That is an astonishing figure.
Michael Calore: So the appeal of something like ChatGPT, if you've used it, you understand why it would be useful to students. The appeal of using it is pretty obvious. It can write, it can research, it can summarize, it can generate working code, but the central question remains. Is using ChatGPT in schoolwork cheating? Where do we draw the line here?
Katie Drummond: So I don't think that there's a black and white answer, which is good for the length of this episode, but I think that that informs my overall view about AI and education, which is that this technology is here, you can't hide it, you can't make it go away. You can't prevent teenagers and young adults from accessing it. So you need to learn to live with it and evolve and set new rules and new guardrails. So in that context, I think there are a lot of uses of AI for students that I would not qualify as cheating. So getting back to the Cliff's Notes debacle, I think using AI to summarize information, like say you're coming up with notes to help you study and you use AI to summarize information for you and come up with a study guide for you, I think that's a fantastic use of AI, and that would actually just save you a lot of time and allow you to focus on the studying part instead of the transcription and all of that stuff. Or honestly, to me, using it to compile research for you that you'll use to then write a paper, I think use cases like that are a natural evolution of technology and what it can help us do. I think for me, where AI becomes cheating is when you use AI to create a work product that was assigned and meant to come from you and now doesn't. But Lauren, I'm curious to hear what you think.
Lauren Goode: Well, it would make for a really good podcast if I vehemently disagreed with you right now. I think we're pretty aligned on this. Earlier this week I happened to be at the Google I/O conference, which is their annual software conference, and it's a huge AI fest. It's an AI love fest. And so I had the opportunity to talk to a bunch of different executives, and many of these conversations were off the record. But after we got through the round of like, "Okay, what's the latest thing you announced?" I just said, "How are you feeling about AI and education? What's your framework for thinking about this?" And one of the people said, "Are you using it to replace the goal of the exercise?" And it's a blurry line, but it's, I think, a line to draw in terms of whether or not you're "cheating." So if you're going to ask that question, you first have to determine the goal and then you have to determine what the product is. The product of an education is not actually test scores or assignments. The product is, are you learning something from doing it? So if you're using AI to generate an output, it's understandable that you would say, "Does this output demonstrate cheating?" But the cheating actually happens during the generative part of generative AI. And once again, that's very fuzzy, but I think that if the goal of an assignment is not just to turn this thing in on your teacher's desk on Tuesday morning, the goal of it is, did you learn something? And so if you're using AI to cheat through the learning part, which is, I think, what we're going to be discussing, then yes, I guess that is cheating. But the use of these tools in education, just broadly speaking, doesn't scream cheating to me.
Katie Drummond: I think that's a really interesting way of thinking about it actually. I like that a lot. Thank you, person at Google.
Michael Calore: Yeah. If the assignment is to write 600 words about the French Revolution, then that's obviously something that ChatGPT can do for you pretty easily.
But if the assignment is getting knowledge into your brain and then being able to relay it, to prove that you've memorized it and internalized it and understand it, then I think there are a lot of things that ChatGPT and tools like it can do for you. Like you mentioned, Katie, you can use it to summarize books, you can use it to help you with the research. One of the most ingenious uses that I've seen is people asking it to generate practice tests. They upload their whole textbook and they say, "I have a test on Friday on chapters four and six, can you generate five practice tests for me to take?" And then that helps them understand what sort of questions they would be getting, and what kinds of things keep popping up in all of those practice tests, those things are probably the most important things to learn. So let me quickly share a real world example of AI cheating to see what you think about it. The most infamous case perhaps comes from a recent New York Magazine story about students using ChatGPT for their coursework. The story starts off with Chungin Roy Lee, a former Columbia student who created a generative AI app explicitly to cheat on his computer science schoolwork. He even ended up using it in job interviews with major tech companies. He scored an internship with Amazon after using his AI helper during the interview for the job. He declined to take that job, by the way. So that's pretty ingenious. He's coding an app. He's using generative AI to make an app to help him cheat on things and get jobs. Do you think that the "ingenuity" behind building something like this is cheating? Do we think that his creation of this AI tool carries any merit?
Lauren Goode: I mean, it's so clearly cheating, because the intent is to cheat. If we go back to that question of, are you using it to replace the goal of what you're trying to do? His goal is cheating. His goal is like, "Look how clever I am and then I'm cheating." Lee strikes me as the irritant in the room. What he's doing is bubbling to the surface a lightning-rod topic that is much bigger than this one specific app.
Katie Drummond: Well, and something I thought was interesting, just in terms of, he's the irritant, but how many complicit irritants does he have on his team? In April of this year, Lee and a business partner raised $5.3 million to launch an app that scans your computer screen, listens to the audio, and then gives AI-generated feedback and answers to questions in real time. And my question when I read that was, "Who are these investors? Who are these people?" The website for this company says, "We want to cheat on everything." And someone was like, "Yes, I am writing a check." Of course it's cheating. They say that it's cheating. I mean, I appreciate the creativity. It's always interesting to see what people dream up with regards to AI and what they can create. But using AI to ace a job interview in real time, not to practice for the job interview beforehand, but to, in real time, answer the interviewer's questions, like you're setting yourself and your career up for failure. If you get the job, you do need to have some degree of competence to actually perform the job effectively. And then I think something else that I'm sure we'll talk about throughout this show is the erosion of skill. It's knowing how to think on your feet or answer tough questions or engage with a stranger, make small talk. There are all of these life skills that I worry we're losing when we start to use tools like the tools that Lee has developed.
And so of course I think there are interesting potential use cases for AI; interview prep or practice is an interesting way to use that technology. So again, it's not about the fact that AI exists and that it's being used in the context of education or a job interview, but it's about how we're using it. And certainly in this case it's about the intent: this is someone who is developing these tools specifically with the intention of using them and marketing them for cheating. And I don't like that. I don't like a cheater, other than when I cheated in high school.
Michael Calore: Well, we've been talking a lot about ChatGPT so far, and for good reason, because it's the most popular of the generative AI tools that students are using, but there are other AI tools that they can use to help with their coursework or even just do their schoolwork for them. What are some of the other ones that are out there?
Lauren Goode: I think you can literally take any of these AI products that we write about every day in WIRED, whether it's ChatGPT, whether it's Anthropic's Claude, whether it's Google Gemini or the Perplexity AI search engine, or Gamma for putting together fancy decks. Beyond all of these tools, there are also sort of highly specialized AI tools like Wolfram or MathGPT, which are both math-focused models. And you can see folks talking about that on Reddit.
Katie Drummond: Something interesting to me too is that there are now also tools that basically make AI detectors pretty useless. So there are tools that can make AI-generated writing sound more human and more natural. So you basically would have ChatGPT write your paper, then run it through an additional platform to finesse the writing, which helps get that writing around any sort of AI detection software that your professor might be using. Some students have one LLM write a paper or an answer, and then they sort of run it through a few more to basically make sure that nothing can show up or nothing can be detected using AI detection software. Or students, I think too, are getting smarter about the prompts they use. So there was a great anecdote in this New York Magazine story about asking the LLM to make you sound like a college student who's kind of dumb, which is amazing. It's like maybe you don't need the A plus, maybe you're okay getting the C plus or the B minus. And so you set the expectations low, which reduces your risk, in theory, of getting caught cheating.
Michael Calore: And you can train a chatbot to sound like you.
Katie Drummond: Yes. Yeah.
Michael Calore: To sound actually like you. One of the big innovations that's come up over the last year is a memory feature. Especially if you have a paid subscription to a chatbot, you can upload all kinds of information to it in order to teach it about you. So you can give it papers, you can give it speeches, YouTube videos of you speaking, so it understands the words that you'd like to use. It understands your voice as a human being. And then you can say, "Write this paper in my voice." And it will do that. It obviously won't be perfect, but it'll get a lot closer to sounding human. So I think we should also talk about some of the tools that are not necessarily straight chatbot tools that are AI tools. One of them is called Studdy, which is study with two Ds, which I'm sure the irony is not lost on any of us that they misspelled study in the name, but it's basically an AI tutor. You download the app and you take a picture of your homework, and it acts like a tutor.
It walks through the problem and helps you solve it, and it doesn't necessarily give you the answer, but it gives you all of the tools that you need in order to come up with the answer on your own. And it can give you very, very obvious hints as to what the answer could be. There's another tool out there called Chegg, C-H-E-G-G.
Katie Drummond: These names are horrific, by the way. Just memo to Chegg and Studdy, you have some work to do. You both have some work to do.
Lauren Goode: Chegg has been around for a while, right?
Katie Drummond: It's a bad name.
Lauren Goode: Yeah.
Michael Calore: It has been. It's been very popular for a while. One of the reasons it's popular is the writing assistant. Basically, you upload your paper and it checks it for grammar and syntax, and it just helps you sound smarter. It also checks it for plagiarism, which is kind of amazing because if you're plagiarizing, it'll just help you not get caught plagiarizing, and it can help you cite research. If you need to have a certain number of citations in a paper, oftentimes professors will say, "I want to see five sources cited." You just plug in URLs and it just generates citations for you. So it really makes that easy.
Katie Drummond: I mean, I will say there are some parts of what you just described that I love. I love the idea of every student, no matter what school they go to, where in the country they live, what their socioeconomic circumstances are, that they would have access to one-on-one tutoring to help support them as they're doing their homework, wherever they're doing it, whatever kind of parental support they do or don't have. I think that that's incredible. I think the idea of making citations less of a pain in the ass is like, yeah, that sounds good. Not such a huge fan of helping you plagiarize, right? But again, it's like this dynamic with AI in education where it's not all good, not all bad. I've talked to educators, and the impression I have gotten, and again, this is just anecdotal, is that there is so much fear and resistance and reluctance, and this feeling among faculty of being so overwhelmed by, "We have this massive problem, what are we going to do about it?" And I just think that too often people get caught up in the massive problem part of it and aren't thinking enough about the opportunities.
Michael Calore: Of course, it's not just students who are using AI tools in the classroom, teachers are doing it too. In an article for The Free Press, an economics professor at George Mason University says that he uses the latest version of ChatGPT to give feedback on his PhD students' papers. So kudos to him. Also, The New York Times recently reported that in a national survey of more than 1,800 higher-education instructors last year, 18% of them described themselves as frequent users of generative AI tools. This year, that percentage has nearly doubled. How do we feel about professors using generative AI chatbots to grade their PhD students' papers?
Lauren Goode: So I have what may be a controversial opinion on this one, which is just give teachers all the tools. Broadly speaking, I don't think it is wrong for teachers to use the tools at their disposal, provided it aligns with what their school system or university policies say, if it is going to make their lives easier and help them to teach better.
So there was another story in The New York Times, written by Kashmir Hill, that was about a woman at Northeastern University who caught her professor using ChatGPT to prepare lecture notes, because of a string of a prompt that he accidentally left in the output for the lecture notes. And she basically wanted her $8,000 back for that semester, because she was thinking, "I'm paying so much money to go here and my teacher is using ChatGPT." It currently costs $65,000 per year to go to Northeastern University in Boston. That's higher than the average for ranked private colleges in the US, but it's all still very expensive. So for that price, you're just hoping that your professors will saw off the top of your head and dump all the knowledge in that you need, and then you'll enter the workforce and nab that six-figure job right out of the gate. But that's not how that works, and that is not your professor's fault. At the same time, we ask so much of teachers. At the university level, most are underpaid. It is increasingly difficult to get a tenure-track position. Below the university level, teachers are far outnumbered by students. They're dealing with burnout from the pandemic. They were dealing with burnout before then, and funding for public schools has been on the decline at the state level for years because fewer people are choosing to send their kids to public schools.
Katie Drummond: I mean, I totally agree with you in terms of one group of people in this scenario are subject matter experts, and one group of people in this scenario are not. They are learning a subject. They are learning how to behave and how to succeed in the world. So I think it's a mistake to conflate or compare students using AI with teachers using AI. I think that what a lot of students, particularly at a university level, are looking for from a professor is that human-to-human interaction, human feedback, human contact. They want to have a back-and-forth dialogue with their educator when they're at that academic level. And so if I wrote a paper and my professor used AI to read the paper and then grade the paper, I would obviously be very upset to know that. That feels like cheating at your job as a professor, and like cheating the student out of that human-to-human interaction. Ostensibly, they are paying for access to these professors; they're not paying for access to an LLM.
Lauren Goode: Lesson plan, yeah.
Katie Drummond: But for me, when I think about AI as an efficiency tool for educators: should a professor use AI to translate a written syllabus into a deck that they can present to the classroom for students who are maybe better visual learners than they are written learners? Obviously. That's an amazing thing to be able to do. You could create podcast versions of your curriculum so that students who have that kind of aptitude can learn through their ears. You know what I mean? There are so many different things that professors can do to create more dynamic learning experiences for students, and also to save themselves a lot of time. And none of that offends me; all of that, actually, I think is a very positive and productive development for educators.
Michael Calore: Yeah, I mean, essentially what you're talking about is people using AI tools to do their jobs in a way that's more efficient.
Katie Drummond: Right, which is sort of the whole promise of AI. In theory, in a best-case scenario, that's what we're hoping for.
Lauren Goode: What it's supposed to be. Yeah.
Katie Drummond: Yeah.
Michael Calore: Honestly, some of these use cases that we're talking about, the ones we agree are acceptable, are much the same as the ways generative AI tools are being used in the corporate world. People are using AI tools to generate decks. They're using them to generate podcasts so that they can understand things that they need to do for their job. They're using them to write emails, take meeting notes, all kinds of things that are very similar to the way that professors are using it. I would like to ask one more question before we take a break, and I want to know if we can identify some of the factors or conditions that we think have contributed to this increasing reliance on AI tools by students and professors. They feel slightly different because the use cases are slightly different.
Katie Drummond: I think that Lauren had a really good point about teachers being underpaid and overworked. So I think the desire for some support via technology and some efficiency, in the context of educators, I think that that makes total sense as a factor. But when I think about this big picture, I don't really think that there is a specific factor or condition here other than just the evolution of technology. The sometimes slow, but often very fast, march of technological progress. And students have always used new technology to learn differently, to accelerate their ability to do schoolwork, and yes, to cheat. So now AI is out there in the world, it's been commercialized, it's readily available, and they're using it. Of course they are. So I will acknowledge, though, that AI is an exponential leap, I think, in terms of how disruptive it is for education, compared to something like a graphing calculator or Google search. But I don't think there is necessarily some new and novel factor other than the fact that the technology exists and that these are students in this generation who were raised with smartphones and smartwatches and readily accessible information in the palms of their hands. And so I think for them, AI just feels like a very natural next step. And I think that's part of the disconnect. Whereas for teachers in their thirties or forties or fifties or sixties, AI feels much less natural, and therefore the idea that their students are using this technology is a much more nefarious and overwhelming phenomenon.
Michael Calore: That's a great point, and I think we can talk about that forward march of technology when we come back. But for now, let's take a break. Welcome back to Uncanny Valley. So let's take a step back for a second and talk about that slow march of technology and how various technologies have shaped the classroom in our lifetimes. So the calculator first made its appearance in the 1970s. Of course, critics were up in arms. They feared that students would no longer be able to do basic math without the assistance of a little computer on their desk. The same thing happened with the internet when it really flowered and came into being in the late 90s and early 2000s. So how is this emergence of generative AI similar to or different from the arrival of these other technologies?
Lauren Goode: I think the calculator is a false equivalence. And let me tell you, there is nothing more fun than being at a tech conference with a bunch of Googler PhDs when you ask this question. And they go, "But the calculator." Everyone's so excited about the calculator, which is great, an amazing piece of technology.
But I think it's normal that when new technology comes out, our minds tend to reach for these previous examples that we now understand. It's the calculator, but a calculator is different. A standard calculator is deterministic. It gives you a true answer, one plus one equals two. The way that these AI models work is that they are not deterministic. They're probabilistic. The type of AI we're talking about is also generative, or originative. It produces entirely new content. A calculator doesn't do that. So I think if you sort of broadly categorize them all as new tools that are changing the world, yes, absolutely, tech is a tool, but generative AI, I think, is in a different category. I was in college in the early 2000s, when people were starting to use Google, and you're sort of retrieving entirely new sets of information in a way that's different from using a calculator, but also different from using ChatGPT. And I think if you were to use that as the comparison, the question is: is skipping all of those processes, the ones you typically learn something by doing, the critical part? Does that make sense?
Katie Drummond: That makes sense. And this is so interesting, because when I was thinking about this question and listening to your answer, I was thinking about it more in that way of thinking about the calculator, thinking about the advent of the internet and search, comparing them to AI. Where my brain went was what skills were lost with the advent of these new technologies, and which of those losses was real and serious and which one maybe wasn't. And so when I think about the calculator, to me that felt like a more salient example vis-a-vis AI, because with the advent of the calculator, are we all dumber at doing math on paper now that we can use calculators?
Michael Calore: Yes.
Katie Drummond: For sure.
Lauren Goode: Totally, one hundred percent.
Katie Drummond: For sure. You think I can multiply two or three numbers? Oh no, my friend, you are so wrong. I keep tabs on my weekly running mileage, and I will use a calculator to be like, seven plus eight plus 6.25 plus five. That's how I use my calculator. So has that skill atrophied as a result of this technology being available? 100%. When I think about search and the internet, I'm not saying there hasn't been atrophy of human skill there, but that to me felt more like a widening of the aperture in terms of our access to information. But it doesn't feel like this technological phenomenon where you are losing vital brain-based skills, the way a calculator does. And to me, AI feels that way. It's almost like when something is programmed or programmable, that's also where I feel like you start to lose your edge. Now that we program phone numbers into our phones, we don't know any phone numbers by heart. I know my phone number, I know my husband's phone number. I don't know anyone else's phone number. Maybe, Lauren, you're right. It's this false equivalence where you can't draw any meaningful conclusion from any one new piece of technology. And AI, again, I think is just exponentially on this different scale in terms of disruption. But are we all bad at math? Yes, we are.
Michael Calore: Yeah.
Lauren Goode: Well, I guess I wonder, and I do still maintain that it's kind of a false equivalence to the calculator, but there were some teachers, I'm sure we all had them, who would say, "Fine, use your calculator, bring it to class." Or, "We know you're using it at home for your homework at night, but you have to show your work."
What's the version of show your work when ChatGPT is writing an entire essay for you?
Michael Calore: There isn't one.
Katie Drummond: Yeah, I mean, I think some professors have had students submit chat logs with their LLMs to show how they used the LLM to generate a work product, but that starts from the foundational premise that ChatGPT or AI is integrated into that classroom. I think if you're just using it to generate the paper and lying about it, you're not showing your work. But I think some professors who maybe are more at the leading edge of how we're using this technology have tried to introduce AI in a way that then allows them to keep tabs on how students are actually interacting with it.
Lauren Goode: Mike, what do you think? Do you think it's like the calculator or Google or anything else you can think of?
Michael Calore: Well, so I started college in 1992, and then while I was at college, the web browser came around, and I graduated from college in 1996. So I saw the internet come into being while I was in the halls of academia. And I actually had professors who were lamenting the fact that when they were assigning us work, we were not going to the library and using the card catalog to look up the answers to the questions that we were being asked in the various texts that were available in the library. Because all of a sudden we basically had the library in a box in our dorm rooms and we could just do it there. I think that's fantastic.
Katie Drummond: Yes.
Michael Calore: I think having access at your fingertips to literally the knowledge of the world is an amazing thing. Of course, the professor who had that view also thought that the Beatles ruined rock and roll and loved debating us about it after class. But I do think that when we think about using ChatGPT and whether or not it's cheating, like yes, absolutely, it's cheating if you use it in the ways that we've defined, but it's not going anywhere. And when we talk about these things becoming more prevalent in schools, our immediate instinct is like, "Okay, well how do we stop it? How do we contain it? Maybe we should ban it." But it really is not going anywhere. So I feel like there may be a missed opportunity right now to actually have conversations about how we can make academia work better for students and faculty. How are we all sitting with this?
Lauren Goode: I mean, banning it isn't going to work, right? Do we agree with that? Is the toothpaste out of the tube?
Katie Drummond: Yes, I think-
Lauren Goode: And you could be a school district and ban it and the kids are going to go, "Haha, Haha, Ha."
Michael Calore: Yeah.
Katie Drummond: I mean, that's a ridiculous idea to even...
Lauren Goode: Right.
Katie Drummond: If you run a school district out there in the United States, don't even think about it.
Lauren Goode: Right. And what's challenging about the AI detection tools that some people use is that they're often wrong. So I think, I don't know, I think we all have to come to some kind of agreement around what cheating is and what the intent of an educational exercise is in order to define what this new era of cheating is. So a version of that conversation has to happen at all these different levels of society, to say, "What is acceptable here? What are we getting from this? What are we learning from this? Is this bettering my experience as a participant in society?"
Katie Drummond: And I think ideally from there, it's sort of, "Okay, we have the guardrails. We all agree what cheating is in this context of AI."
And then it's about how do we use this technology for good? How do we use it for the benefit of teachers and the benefit of students? What is the best way forward there? And there are some really interesting thinkers out there who are already talking about this and already doing this. So Chris Ostro is a professor at the University of Colorado at Boulder, and they recommend actually teaching incoming college students about AI literacy and AI ethics. So the idea being that when students come in for their first year of college, we need to actually teach them about how and where AI should be used and where it shouldn't. When you say it out loud, you're like, "That's a very reasonable and rational idea. Obviously we should be doing that." Because I think for some students too, they're not even aware of the fact that maybe this use of AI is cheating, but that use of AI is something that their professor thinks is above board and really productive. And then there are professors who are doing, I think, really interesting things with AI in the context of education in the classroom. So they'll have AI generate an essay or an argument, and then they will have groups of students evaluate that argument, basically deconstruct it and critique it. So that's interesting to me, because I think that's working a lot of those same muscles. It's the critical thinking, the analysis, the communication skills, but it's doing it in a different way than asking students to go home and write a paper or go home and write a response to that argument. The idea being, "No, don't let them do it at home, because if they go home, they'll cheat." It's an interesting evolution of, I think, Lauren, the point that you've brought up repeatedly, which I think is totally right: thinking about what is the goal here, and then, given that AI is now standard practice among students, how do we get to the goal in a new way?
Michael Calore: Yeah, and we have to figure out what we're going to do as a society with this problem, because the stakes are really, really high. We are facing a possible future where there are going to be millions of people graduating from high school and college who are functionally illiterate because they never learned how to string three words together.
Katie Drummond: And I have a second grader, so if we could figure this out in the next 10 years, that would be much appreciated.
Lauren Goode: So she's not using generative AI at this point?
Katie Drummond: Well, no, she's not. Certainly not. She gets a homework packet and she loves to come home and sit down. I mean, she's a real nerd. I love her, but she loves to come home and sit down and do her homework with her pencil. But my husband is a real AI booster. We were playing Scrabble a couple of months ago, adult Scrabble, with her. She's seven, Scrabble is for ages eight and up, and she was really frustrated because we were kicking her ass, and so he let her use ChatGPT on his computer, and she could actually take a photo of the Scrabble board and share her letters. Like, "These are the letters that I have, what words can I make?" And I was like, "That's cheating." And then honestly, as we kept playing, it was cool, because she was discovering all of these words that she had never heard of before, and so she was learning how to pronounce them. She was asking us what they meant. My thinking about it softened as I watched her using it. But no, it's not something that is part of her day to day.
She loves doing her homework, and I want her to love doing her homework until high school, when she'll start cheating like her mother.
Michael Calore: This is actually a really good segue into the last thing that I want to talk about before we take another break, which is the things that we can do in order to make these tools more useful in the classroom. So, thought exercise: if you ran a major university, or if you're in the Department of Education before you lose your job, what would you be doing over the coming summer break in order to get the institutions in your charge ready for the fall semester?
Katie Drummond: I love this question. I have a roadmap. I'm ready. I love this idea of AI ethics, so I would be scouring my network, I would be hiring a professor to teach that entry-level AI ethics class, and then I would be turning to my department heads, because every realm of education within a given college is very different. If you have someone who runs the math department, they need to think about AI very differently than whoever runs the English department. So I would be asking each of my department leads to write AI guidelines for their faculty and their teachers. You can tell I'm very excited about my roadmap.
Michael Calore: Oh yes.
Katie Drummond: I would then review all of those guidelines by department, sign off on them, and also make sure that they laddered up to a big-picture, institutional point of view on AI. Because obviously it's important that everyone is marching to the beat of the same drum, that you don't have sort of wildly divergent points of view within one given institution.
Lauren Goode: What do you think your high-level policy on AI would be right now, if you had to say?
Katie Drummond: I think it would really be that so much of this is about communication between teachers and students, that teachers need to be very clear with students about what is and is not acceptable, what is cheating, what is not cheating, and then they need to design a curriculum that incorporates more, I would say, AI-friendly assignments and work products into their education plan. Because again, what I keep coming back to is, you can't send a student home with an essay assignment anymore.
Lauren Goode: No, you can't.
Katie Drummond: You can't do that. So it comes down to, what do you do instead?
Lauren Goode: I like it.
Katie Drummond: Thank you. What would you do?
Lauren Goode: I would enroll at Drummond. Drummond, that actually sounds like a college. Where did you go to school? Drummond.
Michael Calore: It does.
Lauren Goode: Well, I was going to say something else, but Katie, now that you said you might be hiring an ethics professor, I think I'm going to apply for that job, and I have this idea for what I would do as an ethics professor teaching AI to students right now. On the first day of class, I would bring in a couple groups of students. Group A would have to write an essay right there on the spot, and group B would presumably be doing the same, but actually they wouldn't be. They would just be stealing group A's work and repurposing it as their own. And I haven't quite figured out all the mechanics of this yet, but basically I would use it as an example of here's what it feels like when you use ChatGPT to generate an essay, because you're stealing some unknown person's work, essentially cut up into bits and pieces, and repurposing it as your own.
Katie Drummond: Very intense, Lauren.
Lauren Goode: I would start off the classroom fighting with each other, basically.
Katie Drummond: Seriously?
Michael Calore: It's a good illustration. I would say that if I were running a university, I would create a disciplinary balance in the curriculum across all of the departments. You want to make sure that people have a good multi-disciplinary view of whatever it is that they're studying. So what I mean is that some percentage of your grade is based on an oral exam or a discussion group or a blue book essay, and some other percentage is based on research papers and tests and other kinds of traditional coursework. So, I think there has to be some part of your final grade that is based on things that you cannot use AI for. Learning how to communicate, how to work in teams, sitting in a circle and talking through diverse viewpoints in order to understand an issue or solve a problem from multiple different angles. This is how part of my college education worked, and in those courses where we did that, where one third of our grade was based on a discussion group, one class during the week was devoted to sitting around and talking. I learned so much in those classes, and not only about other people, but also about the material. The discussions that we had about the material went places that my brain would not normally have gone. So yeah, that's what I would do. I think that's the thing that we would be losing if we all just continued to type into chatbots all the time. There are brilliant minds out there that need to be unleashed, and the only way to unleash them is to not have them staring at a screen.
Lauren Goode: Mike's solution is touch some grass. I'm here for it.
Michael Calore: Sit in a circle, everybody. Okay, let's take one more break and then we'll come right back. Welcome back to Uncanny Valley. Thank you both for a great conversation about AI and school and cheating, and thank you for sharing your stories. Before we go, we have to do real quick recommendations. Lightning round. Lauren, what is your recommendation?
Lauren Goode: Ooh. I recommended flowers last time, so...
Katie Drummond: We are going from strength to strength here at Uncanny Valley.
Lauren Goode: My recommendation for flowers has not changed, for what it's worth. Hood River, Oregon. That's my recommendation.
Michael Calore: That's your recommendation. Did you go there recently?
Lauren Goode: Yeah, I did. I went to Hood River recently and I had a blast. It's right on the Columbia River. It's a beautiful area. If you are a Twilight fan, it turns out that much of the first Twilight movie was filmed right where we were. We happened to watch Twilight during that time just for kicks. I forgot how bad that movie was, but every time the river valley showed up on screen, we shouted, "Gorge!" Because we were in the gorge. I loved Hood River. It was lovely.
Michael Calore: That's pretty good. Katie?
Katie Drummond: My recommendation is very specific and very strange. It is a 2003 film called What a Girl Wants, starring Amanda Bynes and Colin Firth.
Michael Calore: Wow.
Katie Drummond: I watched this movie in high school, where I was cheating on my math exams. Sorry. For some reason, just the memory of me cheating on my high school math exams makes me laugh. And then I rewatched it with my daughter this weekend, and it's so bad and so ludicrous and just so fabulous. Colin Firth is a babe. Amanda Bynes is amazing, and I wish her the best. And it's a very fun, stupid movie if you want to just disconnect your brain and learn about the story of a seventeen-year-old girl who goes to the United Kingdom to meet the father she never knew.
Michael Calore: Wow.
Lauren Goode: Wow.
Katie Drummond: Thank you. It's really good.
Lauren Goode: I can't decide if you're saying it's good or it's terrible.
Katie Drummond: It's both. You know what I mean?
Lauren Goode: It's some combination of both.
Katie Drummond: It's so bad. She falls in love with a bad boy with a motorcycle, but a heart of gold, who also happens to sing in the band that plays in UK Parliament, so he just happens to be around all the time. He has spiky hair. Remember 2003? All the guys had gel, spiky hair.
Lauren Goode: Yes, I still remember that. Early 2000s movies, boy, did they not age well.
Katie Drummond: This one though, aged like a fine wine.
Michael Calore: That's great.
Katie Drummond: It's excellent.
Lauren Goode: It's great.
Katie Drummond: Mike, what do you recommend?
Lauren Goode: Yeah.
Michael Calore: Can I go the exact opposite?
Katie Drummond: Please, someone. Yeah.
Michael Calore: I'm going to go literary.
Katie Drummond: Okay.
Michael Calore: And I'm going to recommend a novel that I read recently that just shook me to my core. It's by Elena Ferrante, and it is called The Days of Abandonment. It's a novel written in Italian by the great pseudonymous novelist and translated into English and many other languages. And it is about a woman who wakes up one day and finds out that her husband is leaving her, and she doesn't know why, and she doesn't know where he's going or who he's going with, but he just disappears from her life, and she goes through it. She accidentally locks herself in her apartment. She has two children that she is now all of a sudden trying to take care of, but somehow neglecting because she's-
Katie Drummond: This is terrible.
Michael Calore: But the way that it's written is really good. It is a really heavy book. It's rough, it's really rough subject-matter-wise, but the writing is just incredible, and it's not a long book, so you don't have to sit and suffer with her for a great deal of time. I won't spoil anything, but I will say that there is some resolution in it. It's not a straight trip down to hell. It is, really, just a lovely observation of how human beings process grief and how human beings deal with crises, and I really loved it.
Katie Drummond: Wow.
Michael Calore: I kind of want to read it again, even though it was difficult to get through the first time.
Katie Drummond: Just a reminder to everyone, Mike was the one who didn't cheat in high school or college, which totally tracks from the beginning of the episode to the end.
Michael Calore: Thank you for the reminder.
Katie Drummond: Yeah.
Michael Calore: All right, well, thank you for those recommendations. Those were great, and thank you all for listening to Uncanny Valley. If you liked what you heard today, make sure to follow our show and to rate it on your podcast app of choice. If you'd like to get in touch with us with any questions, comments, or show suggestions, write to us at uncannyvalley@ We're going to be taking a break next week, but we will be back the week after that. Today's show is produced by Adriana Tapia and Kyana Moghadam. Greg Obis mixed this episode. Jake Lummus was our New York studio engineer. Daniel Roman fact-checked this episode. Jordan Bell is our executive producer. Katie Drummond is WIRED's global editorial director, and Chris Bannon is the head of Global Audio.

Is Elon Musk Really Stepping Back From DOGE?

WIRED

17-05-2025


Elon Musk is apparently turning his attention away from Washington and back to Tesla. On this episode of Uncanny Valley, the hosts unpack what Musk's pivot means for the future of DOGE. Elon Musk arrives for a town hall meeting wearing a cheesehead hat at the KI Convention Center on March 30 in Green Bay, Wisconsin. Photo-Illustration: WIRED Staff; Photograph: All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. Elon Musk says he's stepping back from his role with the so-called Department of Government Efficiency to turn his attention to his businesses—most urgently to Tesla, which has faced global sales slumps in recent months. In this episode, we discuss how our understanding of DOGE has evolved over the past five months and what we think will happen when Musk scales back. You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Katie Drummond on Bluesky at @katie-drummond. Write to us at uncannyvalley@ How to Listen You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too. Transcript Note: This is an automated transcript, which may contain errors.
Michael Calore: Hey, this is Mike. Before we start, I want to take the chance to remind you that we want to hear from you. Do you have a tech-related question that's been on your mind, or maybe you have a topic that you wish we talked about on the show? If so, you can write to us at uncannyvalley@ and if you listen to and enjoy our episodes, please rate it and leave your review on your podcast app of choice. It really helps other people find us. Hi folks, co-hosts. How's it going?
Katie Drummond: Ugh.
Michael Calore: That good?
Katie Drummond: That was me, Katie. That was me speaking. No, it's going all right. It's been a stressful 90 minutes leading up to recording this podcast, but I'm okay.
Michael Calore: Did you just fly through Newark?
Katie Drummond: No, actually I didn't. Although I know that that is in your cards in the near future. I actually rescheduled a flight to avoid Newark, so I'm now taking a red eye for no reason other than I don't want to fly into Newark Airport.
Lauren Goode: Smart.
Katie Drummond: Thank you.
Michael Calore: I'm jealous.
Lauren Goode: Mike, I'm sending you all of the good wishes.
Michael Calore: Thank you. I hope to listen to this podcast on an airplane that takes off on time and lands on time without incident on Thursday.
Lauren Goode: I hope you return next week able to tape another podcast because you didn't get stuck somewhere.
Michael Calore: Metaphysically, we're all stuck somewhere right now, I think.
Lauren Goode: Yeah, we're in the middle of some big transitions. That's probably the one thing that we have in common with Elon Musk.
Katie Drummond: Touché.
Michael Calore: Back in the first week of January, we put out an episode of this show that was all about DOGE, the so-called Department of Government Efficiency. I would say it was our very first DOGE episode, if I'm remembering correctly. And we talked about the key players, the goals of the group, and the ins and outs of government spending. A lot has happened since then.
And now Elon Musk says that he's stepping back from his full-time role at DOGE. There are still many unanswered questions about where DOGE stands now, including if and when Elon's exit will happen, but we're wondering what actually has been accomplished during Musk's time with the DOGE Bros. So, today on the show, the latest on DOGE and what it may look like post-Elon. This is WIRED's Uncanny Valley, a show about the people, power, and influence of Silicon Valley. I'm Michael Calore, Director of Consumer Tech and Culture here at WIRED.

Lauren Goode: I'm Lauren Goode, I'm a Senior Writer at WIRED.

Katie Drummond: And I'm Katie Drummond, WIRED's Global Editorial Director.

Michael Calore: So, I want to start by asking a question that we asked in our last deep dive on DOGE, because I think the answer may have changed since then. At this moment, just a few months into Trump's second term as President, May 2025, what exactly is DOGE?

Lauren Goode: Well, I wish it was a figment of our imagination.

Katie Drummond: Yes, I wish that it was a fever dream, but that is still the big question, incredibly enough. And I think at WIRED, we've actually been very careful when we characterize DOGE in our reporting. We often, or always, use the term "so-called." The so-called Department of Government Efficiency, because it doesn't really actually exist. And as some WIRED reporters pointed out last month, I think it was Zoë and Kate, it's almost a metaphysical question at this point. And that was in relation to employees at the General Services Administration: despite the fact that there are at least half a dozen DOGE operatives on payroll at that administration, and despite the fact that there is a section of that building that is for DOGE use only and is a secure facility within the GSA, the acting head of the GSA actually said, in an all-hands, that there was no DOGE team working at the GSA. Which begs the question, well, who are these people then, and who do they work for? I think in a more practical way, there are two DOGEs. There's US Digital Service, which was essentially hijacked and repurposed by the administration, now known as the US DOGE Service. Sure. And then there's a temporary organization within the US DOGE Service, called, obviously, the US DOGE Service Temporary Organization. And that organization is ostensibly in charge of carrying out the DOGE agenda. So, I think, all of this semantic BS aside, what is DOGE? Well, it is the brainchild of Elon Musk. It is something that the president got on board with very early, and DOGE is effectively a collection of typically young, I think almost always male, technologists who come from companies that Musk and Peter Thiel run or have run. Despite what the acting head of GSA says, there is a DOGE, and it is made up of these dozens and dozens of technologists who are working inside all of these different agencies. That is what DOGE is, whether it's a real department or agency or not. And we have a pretty good sense now, in May, of what they're actually doing.

Michael Calore: And it's important to note that they did make a number of hires, dozens and dozens of people who they hired to be a part of DOGE, who are now installed in various agencies around the federal government.

Lauren Goode: And a lot more layoffs too.

Michael Calore: Yeah. Well, we have been doing a lot of reporting on DOGE. Katie, as you just mentioned, WIRED has been on top of the story ever since the beginning, because we know Elon and we know his playbook.
So, what are some of the stories that WIRED has done over the last few months on DOGE that have just totally blown your mind?

Katie Drummond: Wow. There are a lot. I think the reporting that we have done around what DOGE is doing using AI, and using all of the data that they've been able to access to actually surveil immigrants, I think that reporting is incredibly disturbing. I think it is beyond the worst fears of folks in late January, early February, as DOGE's work was getting underway. The idea that this kind of thing could happen, and that it could happen so quickly, certainly was talked about. It was speculated about in terms of, what do you think they're going to do? What are they after? There were a lot of hypotheses at the time. I don't think anyone anticipated that we would see that kind of work happen so quickly and in such a dystopian way. And then, it hasn't blown my mind, but I really like the coverage that we've done around how recruiting for DOGE happens. And we just published another story on this recently, a couple of weeks ago, in early May, from Caroline Haskins and Tori Elliott, about another round of recruiting that's happening for DOGE. And this recruiting always seems to happen in these Slack groups for alumni of various tech companies. This time it was Palantir, and this guy, this entrepreneur, went into the Slack room and basically said, "Hey, I'm looking for people who would be excited to design and deploy AI agents that could free up at least 70,000 full-time government workers over the next year." And in the way he phrased it, he was saying these agents could free up these 70,000 people for, quote, "higher impact work." Which begs the question: higher impact work in the private sector after you fire all of them? Exactly what is the plan? And that story was really interesting to me because, first of all, I think how the recruiting happens is really interesting. The fact that they're specifically targeting alums from certain companies, that this is happening in Slack groups and message boards, I think that's interesting. But I thought that the way that message was received was fascinating, given that we're now in May, and people have seen DOGE play out over the last few months. We wrote, "Eight people reacted with clown face emojis, three reacted with a custom emoji of a man licking a boot. Two reacted with a custom emoji of Joaquin Phoenix giving a thumbs down in the movie Gladiator. And three reacted with a custom emoji with the word 'fascist.'" So, it was just interesting to me to note that alums of a company like Palantir are looking at that message, and at least some of them are saying, "Nah, I see what you're doing here. And this is not only not compelling to me as a recruitment effort, but actually fascist."

Lauren Goode: Now, I should mention that I happen to have been on a short book leave at the start of this year—

Katie Drummond: Good timing.

Lauren Goode: When ... Great timing. Katie knows I came back, and I was lamenting to her via our Slack, like, "Katie, I'm literally never taking leave again because so much happened." And starting in late January, I started to see WIRED's incredible reporting, watching it from afar and seeing all this news come out about DOGE, and just was like, "What is happening?"
And one of the things that stood out to me almost immediately was this juxtaposition of cuts to the federal workforce and also cuts to federal spending, like the $1 limit that was placed on federal employees' credit cards—

Michael Calore: Oh, gosh.

Lauren Goode: And how much this limited their ability to do their jobs, like running out of toilet paper, running out of printer paper, not being able to just do office functions as a federal employee, juxtaposed with Trump's incredibly lavish candlelight dinners and the crypto scheme we talked about last week, and all of the ways in which it seems like there are members of this administration who are simply lining their pockets as they have dispatched DOGE to make all of these cuts. If you just step back from that, it's hard to see, at this point, how this benefits America. What has actually happened here?

Michael Calore: I think probably my favorite story is one of our most recent ones, about the Library of Congress, and how two gentlemen showed up to the Library of Congress and said, "Hi, we work here. You need to let us in." Capitol Police said, "No. Who are you? Can you identify yourselves?" And they showed them a note from DOGE saying that they worked there and that they should be let in. And the Capitol Police turned them away. And it turns out they did actually work there. They had a note from Daddy.

Lauren Goode: Please never call him that again.

Katie Drummond: Oh, boy.

Michael Calore: So, back when we first started talking about DOGE, at the beginning of the year, it was actually two people: Elon Musk and Vivek Ramaswamy. I think a week after we published that episode, Vivek was out.

Lauren Goode: Has anyone heard from Vivek?

Katie Drummond: I don't think about him. I don't know him. I don't know that man. No. Isn't he running for governor?

Lauren Goode: I was going to say he's running for governor of Ohio. Wasn't that the plan? I like how we're all Googling this.

Katie Drummond: He's pivoted.

Michael Calore: Well, it's important to think about who's running it now, because Elon says he's only going to be around one to two days a week. He says he will continue to do work for DOGE and for President Trump until the end of Trump's term, whatever year that may be. He's going to be scaling back. He's going to go on 20% time, basically. So, who are the people who are still there? Who are the names that we now need to know?

Lauren Goode: I think AI agents are going to be running all of it.

Katie Drummond: Well, obviously, they're apparently replacing 70,000 federal workers with them within the year. There are some very high-profile members of DOGE after just a few short months. There's Edward "Big Balls" Coristine, the 19-year-old appointed by Musk who owns LLC. I'm sure everyone is familiar with Big Balls at this point. There are plenty of other young, inexperienced engineers working across these agencies, and then there are the adults in the room. There are people like Steve Davis, who is one of Musk's right-hand men, who works closely alongside him at a number of his companies and has been working with him in the federal government. And we also, of course, know that they are still actively recruiting, again, largely from companies that Musk himself owns. So, I think that the whole point of all of this is that, yes, Elon Musk is scaling back. So, let's say he scales back. Let's say he decides to part ways with DOGE and the administration altogether.
DOGE is already embedded in the federal government. He accomplished what he set out to do, insofar as we now have DOGE team members, DOGE operatives, at dozens and dozens and dozens of federal agencies. They very clearly have their marching orders, and they're carrying out work. So, at this point, you can't claw that all back, and that doesn't leave the federal government just because Elon Musk potentially leaves the government. The damage is done. I do think it's important to note here, and I know this will come up over and over because I'm going to keep bringing it up: Elon Musk at two days a week is a lot of Elon Musk. Twenty percent of Elon Musk's time going to the federal government, sure, he won't be in the weeds seven days a week, 24 hours a day, but that's a lot of Musk time. So, I do think it's important to be cautious, and I just say this to all of our listeners and to everyone out there: this idea that Musk is disappearing from the federal government or disappearing from DOGE, the administration might want you to think that that's what's happening. I suspect that that is not at all what's happening. That said, from all appearances, Elon Musk might be less involved in DOGE, but DOGE is going to keep on keeping on.

Michael Calore: And while it's trucking, what is Elon going to be doing? What does he say?

Lauren Goode: Yeah, what is he going to be doing? Katie, do you have a sense of how much of this is related to the fact that Tesla isn't doing so well right now?

Katie Drummond: Well, I suspect that that's a big factor, but I think so much of the narrative externally, and even people at Condé Nast who have come up to me to be like, "Elon, he's out. Is it Tesla? Why is he leaving DOGE?" This is optics. This is narrative. His company is in the tubes, it is really struggling. They needed a way to change that story, and they needed a way to change that story very quickly. The best way that they could change that story was to say, "No, no, no, no, no. Don't worry. Elon Musk is not all in on DOGE and the federal government. He is going to be stepping back, and he's going to be focusing on his other companies." Even just Trump saying that, Musk saying that, that being the narrative that plays out in the media, is incredibly helpful for Musk, particularly in the context of Tesla, and just the board, and shareholders, and their confidence in his ability to bring this company back from the brink. So, do I think that he's pulling back and will be spending less time with DOGE? Yes. Do I think a lot of this was just smoke, and mirrors, and optics, and narrative, and PR? Yes. It was incredibly well-timed: right as Tesla was really, really, really in the tubes and getting a ton of bad press, Elon Musk makes this very convenient announcement, right?

Lauren Goode: Mm-hmm. Right. And this is something that the venture capitalist and Musk's fellow South African, David Sacks, has said: it's just what Musk does. He said Musk has these intense bursts where he focuses on something, gets the right people and the structure in place, feels like he understands something, and then he can delegate. And he's just reached that point with DOGE. He's in delegation mode.

Katie Drummond: Yes, it seems like he has all the right people in place, and a structure that is so clear and transparent to the American people, that it's time for him to move on.

Michael Calore: And I do think that he is going to have to figure out the Tesla situation. As you said, the company's really struggling, and there are a lot of reasons for that.
There are no new Tesla models for people to buy, even though they were promised. There have been a bunch of recalls. People are just hesitant about buying a new EV right now anyway, for a number of reasons. But really, it's him that people don't like. So, much like the damage that he has done to the structure of the federal government with DOGE, he has done damage to Tesla, the brand, by his association with the policies of the Trump Administration, his cozying up to the President, and his firing people and destroying their rights.

Katie Drummond: And isn't it also true that all of these problems with Tesla, all of the problems aside from Elon Musk himself, were happening or were poised to happen regardless? The issues with new models, with recalls, all of that predates his work with DOGE, unless I'm drastically misunderstanding how time works. So, those problems with the company existed and were bound to become a bigger deal at some point, and then it really feels like his work with DOGE and the federal government just added fuel to the fire. He poured gasoline on all of his company's problems by participating with the Trump Administration in the way that he did. But the fact that Tesla is a troubled company is old news, and it has nothing to do with the fact that Elon Musk is not a well-liked individual. So, it's just problem on top of problem.

Michael Calore: That's right. And the damage is done, I think, at this point. He would probably have to move on from that company in order to fully turn it around.

Katie Drummond: Well, we still have a lot of time left in the year, so we'll see.

Michael Calore: All right, well, let's take a break and we'll come right back. Welcome back to Uncanny Valley. When we talked about DOGE at the beginning of the year, it still felt just like an idea. The tone was decidedly different. We talked about how the group was named after a meme coin, and we all had a good laugh at the absurdity of it all. It was still unclear what would happen. And of course, since then, DOGE has gutted multiple federal agencies, dismantled so many programs, fired a bunch of people, and built giant databases to track and surveil people, among other things.

Katie Drummond: So, I wasn't actually with you guys on the show when you talked about DOGE in January, but I was listening, and I remember you talking about Musk's plans to, quote, "open up the books and crunch the numbers to cut costs." Sounds very exciting. And cutting some of those costs, of course, had to do with laying people off. Now, I remember that because Zoë Schiffer, who hosts the other episode of Uncanny Valley, said she would be surprised if any, quote, "books were even opened." So, what did we see actually happen from that prediction to now, from January to May?

Lauren Goode: I want to give Zoë a shout-out here, because I think the context of that was me saying, "Oh, I wonder how they're going to go about this careful, methodical process of doing the thing." And she was like, "This is going to be utter chaos. They're not going to open any books."

Katie Drummond: She was right. It has been chaos.

Lauren Goode: We also said that The New Yorker reported Vivek had joked at one point that he was going to do a numbers game: you would lose your job if you had the wrong Social Security number. That didn't actually happen, but Zoë surmised at the time that this was potentially going to be run off of the Twitter/X playbook, run like a chaotic startup.
And that's true. I definitely did think there would be more of a process to what DOGE was doing, so I was wrong. There was process: they have systematically terminated leases for federal office buildings, or taken over other buildings. They're reportedly building out this big master database. They've gutted public agencies like the CDC, and regulatory bodies like the CFPB, the Consumer Financial Protection Bureau. So they've done a lot. I think the part where I thought there would be more process was around the people, the human capital of all this, the federal workforce. And so, maybe in a lot of ways, this is just like some startup: you're acting recklessly and worrying about the human beings you're affecting later.

Michael Calore: And I think the thing that we also predicted correctly was that if DOGE had a chance to shape the regulatory agencies in the federal government, they would shape those agencies in a way that benefits people who are in their industry.

Lauren Goode: Right.

Katie Drummond: I think one of the questions you guys were asking back in January was whether the administration was bringing in these guys, it was Musk and Ramaswamy at the time, because they actually wanted them to advise on how technology is used as part of government services, as part of the way the government works, or because they thought the two would be influential over the types of regulations that are rolled back or introduced. So, man, it's crazy to even say all of that, knowing what we know now about ... It's just interesting: in January, we knew so little, we were so naive. But what do you think now about why Musk, in particular, was actually brought on board?

Lauren Goode: Well, honestly, I think that they have done both. WIRED has reported that DOGE is building out a master database of sensitive information about private citizens, and a database that will reportedly help them track immigrants. And we know they're playing around with these AI agents, like you just talked about, Katie. So, we know that they were brought in to apply that technology-building mindset to government services, if you want to call it that. But I think that they also are influencing policy, because on the policy side, we've seen, I mentioned David Sacks, he's Trump's crypto and AI czar, and he's been weighing in on cryptocurrency and stablecoin regulations. Even if that hasn't been pushed through yet, he's certainly in Trump's ear about it. Musk has also been pushing back on Trump's tariff policies. Musk has been expressing his opinion on immigration policies. Those are just a few examples, but safe to say, he has Trump's ear.

Michael Calore: I think at the beginning, I was cautiously interested in the IT-consultant part of it, the DOGE mission to come in and modernize the federal government. Obviously, if you've ever dealt with federal government agencies as a person who's computer-literate, sometimes you are just completely flabbergasted by the tools that you have to use to get access to services in this country. So, yes, guys, come in, do your thing, zhuzh it up, make it work better. Of course, that is absolutely not what happened, but I was excited about the prospect of that maybe happening. And it turns out that they really took the opportunity to take all of the data in all of these agencies and put it all together into one giant input, fed into various systems that are going to process that data and find efficiencies in ways that are probably going to affect human beings negatively.
A computer is really good at doing very simple tasks over and over again. It doesn't necessarily understand the nuances of how things are divided up equitably among different sectors of society, and it doesn't understand the nuances of people's personal situations. So, that's the modernization that we're going to see, I think, of government systems. And that's frightening. That wasn't what I was expecting.

Katie Drummond: Now, we've talked a little bit, on and off in this episode already, about AI. AI has played a much bigger role with DOGE than maybe we thought it would, or maybe we hoped it would, in January. So, let's talk about that. As far as we know now, what does DOGE aspire to do with AI, and how were you thinking about that in January, if you were thinking about it at all?

Lauren Goode: I still feel like I don't really understand what they're trying to do with AI, frankly.

Katie Drummond: Maybe they don't.

Lauren Goode: We know at this point that there are AI officers and leaders in the federal government. We mentioned David Sacks before, who was put in charge of crypto and AI. There is now the first-ever AI officer at the FDA, Jeremy Walsh. WIRED has reported that OpenAI and the FDA are collaborating on an AI-assisted scientific review of products. Our colleague Brian Barrett has written about the use of AI agents. In particular, Brian wrote, "It's like asking a toddler to operate heavy machinery." The Social Security Administration has been asked to incorporate an AI chatbot into its work. And we've also reported on how the GSA, the General Services Administration, has launched something called the GSAi bot. But we also later found out that that's something that was based on an existing code base, a project that existed prior to DOGE taking over the building. I think the short answer is that when DOGE first started, we didn't really have a clear sense of how they were going to use AI. And even right now, after saying all that on this podcast, I cannot pretend to understand fully what they are doing with AI. And that's either due to a lack of transparency, or just the fact that it all seems very disparate, very scattered. I'm not going to sit here on this podcast and pretend to make sense of it.

Michael Calore: With a lot of this stuff, it's hard to understand where the DOGE initiatives end and where other initiatives in the federal government begin, simply because there's a lack of transparency about how these decisions are being made, who's advising whom, and who's really drafting the memos. When we think about what AI is going to do, we have to consider what an AI agent is. It is a program that can do the same work as a human being. And that's just the broad definition of it. So, you can deploy an AI agent to write emails, make phone calls, fill out paperwork, whatever it is. You're just basically doing admin work, and there are a lot of admins in the federal government, and I think that that is in our future. People have this cozy idea that their experience with AI is maybe ChatGPT or Siri, or something like that. So, "Oh, you have a problem with your taxes, you can just talk to the IRS chatbot and it'll solve it for you." That sounds like a nightmare. I can't imagine that any IRS chatbot is going to be able to solve any problems for me. It'll probably just make me mad and make the problems worse, or leave them the same.
But when you think about, "Okay, here is an opportunity for us to use these AI agents in a way that will increase efficiency across the government," what you're really talking about is: we don't need these people anymore, and we just need to replace them with the technology.

Katie Drummond: One of the pieces of this that I think is so consequential: I remember, maybe a year and a half ago, talking to a bunch of civil servants, people in decision-making roles across federal agencies, and they were all asking a lot of questions about AI. They were very curious about AI. The Biden Administration's executive order had put forth all of these different demands of different agencies to investigate the potential for AI to do X, Y, or Z within their agencies. So they were in that exploratory process. They were very slow to think about how AI could be useful within those agencies, and that's for bureaucracy reasons, but it's also because, with the work of these federal agencies, you don't really want to get it wrong. When we're talking about the IRS, or payments from Treasury, or evaluating new drugs via the FDA, you want to be right. You want to reduce the risk of error as much as possible. And I think for so many people in technology, there's this notion that technology inevitably outdoes human performance. It's inevitable that a system will do a better job than a human being, who is fallible, who makes mistakes. That said, what we know about AI so far, generative AI in particular, is that it makes a lot of mistakes. This is very imperfect technology. AI agents are not even really ready for primetime within a private company, for one individual to use in their own home, let alone inside the federal bureaucracy. So, I do think that a lot of what DOGE has done with AI, like, Lauren, to your point about them building on top of this existing AI initiative at the GSA, is take very preliminary work in AI at these agencies and just fast-track it. They're saying, "This is going to take three years. No, no, we're doing this in three weeks." And that's scary, given what we know about AI and how effective and how reliable it is right now. So, does anything stand out to you guys about that, in the context of what we're talking about around AI and DOGE, and AI in the federal government? What are some of the risks that really stand out to you?

Lauren Goode: I think that it is consequential when you think about AI being used in such a way that it ends up impacting people's jobs, right?

Katie Drummond: Right.

Lauren Goode: But I actually think that that idea of AI agents doing the jobs of humans at this point is a little bit optimistic. When I think about what feels more consequential, it's this idea of AI just becoming a code word or a buzzword for what is essentially very, very, very advanced search.
So, if they are able to build this master database that creates some sort of profile of every US citizen, or every US non-citizen, and is pulling in from all these different data sources, both within government agencies and from public documents, and across the web and across social media, and anything you've ever tweeted, and anything you've ever said, and anything you've ever done, and whether you've ever gotten a parking ticket or a DUI, or you've committed a crime, or anything like that, to just hoover that all into one centralized location and be able to pull that up on a citizen at the drop of a hat: that, to me, feels more consequential and potentially more dangerous than going to the Social Security website and having an annoying bot trying to answer your questions for you.

Michael Calore: It's surveillance creep, really, is what it is. And marry that with computer vision, like face recognition and the ability to photograph everybody who's in a car at the border, cross-reference that with government documentation like passports and driver's licenses, and you have a whole new level of surveillance that we have not dealt with before in our society.

Katie Drummond: Now, not to be all negative Nelly, because we often are, but does any ... What?

Michael Calore: What show are you on?

Katie Drummond: You know me, the Canadian. Does anything stand out to both of you as having actually been good from all of this? So, the DOGE takeover, January to May: anything potentially exciting? Any bright spots, anything where we should be a little bit more generous in our assessment and say, "You know what, actually, as dystopian and scary as a lot of this is, this is potentially a good thing, or this is unequivocally a good thing"? Anything like that that stands out to either of you?

Lauren Goode: I would say that if there's one area where we could be a little bit more generous, it might be that if this turnaround of the federal government was something that was being done in good faith, then I might give them a pass after just five months. I might say ... Katie, you've done turnarounds before?

Katie Drummond: I have.

Lauren Goode: They take longer than five months, right?

Katie Drummond: They do.

Lauren Goode: Yes. Okay.

Katie Drummond: Depends on the size of the organization. With the federal government, you're looking at five to 10 years.

Lauren Goode: Right. Exactly. So there's that. In terms of the actual cuts to fraud and abuse, as promised: as far as we know, and as has been reported by other outlets, the actual cuts that DOGE has made fall far below what Trump and Musk had promised. Initially, they said that they were going to slash $2 trillion from the federal budget. That goal was cut in half almost immediately. The latest claims are that $160 billion has been saved through firing federal workers, canceling contracts, selling off buildings, and other things. And NPR just reported that the tracker on DOGE's own website is rife with errors and inaccuracies, though. The wall of receipts that DOGE has been posting totals just $63 billion in reductions. And actually, as of late March, government spending was up 10% from a year earlier. Revenue was still low. So, we're still in a deficit, in terms of federal spending. There is one thing I've heard from folks in Silicon Valley that they think is a good thing: it's Musk's pushback on some of Trump's immigration policies, specifically those that affect high-tech workers.
During Trump 1.0, denial rates for H-1B visas spiked, and Trump said he wanted to end, forever, the use of H-1B visas; he called it a cheap labor program. Now, he has flip-flopped a bit. Stephen Miller, his Homeland Security Advisor and Deputy Chief of Staff, has been pushing for more restrictions on this worker visa. But Musk, who actually understands how critical this visa is for the talent pipeline in Silicon Valley, maybe because he's an immigrant, has, I think, managed to sway Trump a bit on that. And so, for obvious reasons, perhaps, people in Silicon Valley say, "Well, I think this is actually a good thing that Musk is doing."

Michael Calore: I'll point out two things.

Lauren Goode: Go ahead.

Michael Calore: One, the LOLs. The press conference that they did in the Oval Office where Elon brought his child—

Katie Drummond: Oh, that was good.

Michael Calore: That was definitely a big highlight for me. But seriously, the other thing is that people are really engaged now. You talk to people who are somewhat politically minded, and they have opinions about government spending, they have opinions about oversight and transparency, they have opinions about what actually matters to them: what they need from their government, what they want their government to do for them. Those were all nebulous concepts even five, six months ago that I think are at the top of everybody's mind now. And I think that is a good thing.

Katie Drummond: Oh, I love that. A galvanized and engaged public—

Michael Calore: That's right.

Katie Drummond: As a plus side to DOGE. I love it. We're going to take a quick break, and we'll be right back.

Michael Calore: Welcome back to Uncanny Valley. Before we wrap up, let's give the people something to think about: our recommendations. Katie, why don't you go first?

Katie Drummond: I have an extremely specific recommendation. Do either of you use TikTok?

Lauren Goode: I do sometimes.

Michael Calore: Define use.

Katie Drummond: Scroll.

Lauren Goode: Yeah, scroll maybe like once every couple weeks.

Katie Drummond: Do you thumb through TikTok?

Michael Calore: I'm familiar with it, yes.

Katie Drummond: There is an account on TikTok called Amalfi Private Jets. It is the account of a private jet company, and it is the most genius marketing I have ever seen in my life, for someone who likes reality TV and trash, which is me. It's these little 60-second reality TV episodes, where the CEO of Amalfi Private Jets is on the phone or on a Zoom with one of his clients, often, I think, her name is McKenna. She's a young, extremely wealthy, entitled little brat, and she'll call him up in the clip; he's at his office, he's young and handsome, and he's like, "Hey, McKenna." And she's like, "Hey, Colin. So, my dad said that I had to fly from Geneva to London," and blah, blah, blah. And then there's this whole dramatic narrative around McKenna and why she needs a $75,000 jet immediately, and she needs it to have vegan spinach wraps refrigerated. It's just these very dramatic little vignettes of what life is like for the rich and fabulous who are calling Amalfi Private Jets to book their private jets. So there's that account. And then, once you go down the rabbit hole of that account, the TikTok algorithm will start serving up these companion accounts they've created: the CEO of the company has one, his girlfriend has one. I think McKenna now has one.
And so, there's this little cinematic universe of Amalfi Private Jets on TikTok, and you get sucked in, and you get to know all of these people. And it's a little vertical-video reality show experience that I highly recommend. If you only have 60 seconds, which then turn into two hours, which then turn into pulling an all-nighter to learn everything about Amalfi Private Jets, their CEO, his girlfriend, and their wealthy clientele, this is the TikTok for you. Enjoy.

Michael Calore: This is genius.

Katie Drummond: Thank you.

Lauren Goode: This is the reality TV of the future.

Katie Drummond: It's incredible.

Lauren Goode: It has arrived.

Katie Drummond: And you know what? I just did their job for them, because it's marketing for their company. They got me.

Michael Calore: All right, Lauren, what's your recommendation?

Lauren Goode: My recommendation might go nicely on your Amalfi private jet. Hear me out: peonies. You guys like flowers?

Michael Calore: Oh, peonies.

Lauren Goode: Peonies.

Katie Drummond: I like flowers.

Michael Calore: Sure.

Lauren Goode: Do you like peonies?

Katie Drummond: I couldn't tell one from another, but I like them.

Lauren Goode: They're beautiful. It's peony season here. I'm saying that now with the O enunciated, which is how I would do it if I were giving my Architectural Digest home tour.

Michael Calore: I see.

Lauren Goode: Yes, these are peonies.

Katie Drummond: Oh, I'm just looking at Google Images of them. They're very nice.

Lauren Goode: Aren't they beautiful?

Katie Drummond: They're very nice.

Lauren Goode: The cool thing is they do have a very short-lived season. In this part of the world, it's typically late May through June. If you plant them, they only bloom for a short period of time. If you buy them, they're these closed balls, not to be confused with Edward Coristine, "Big Balls," and then after a few days they open up and they're the most magnificent-looking things. They're really, really pretty. I got some last week at the flower shop, and when they opened, I was like, "Oh my God." It just made me so happy. And they're bright pink. So, if you're just looking to do something nice for yourself, or you want to pick up a nice little thoughtful gift for someone, get them some peonies. You know what? I didn't check to see if they're toxic to pets. So, check that first, folks. But, yes.

Michael Calore: That's great.

Katie Drummond: Mike, what's yours?

Michael Calore: I'm going to recommend an app. If you follow me on Instagram, @snackfight on Instagram, you may notice that I have not posted in a long time. That's because I stopped posting on Instagram, and I basically just use it as a direct-message platform now. But there are still parts of my brain that enjoy sharing photos with my friends, so I found another app to share photos on, and it's called Retro.

Lauren Goode: Yeah, Retro.

Michael Calore: It's been around for a while, but I went casting about for other things out there, and I found that there was a group of my friends who are on Retro, and I was like, "Oh, this is great." It's very private. By default, somebody can only see back a couple of weeks. But if you would like to, you can give another user a key, which unlocks your full profile so that they can look at all of your photos going back to the beginning of time, according to whenever you started posting on Retro. I really like that about it, the fact that when I post a photo, I know exactly who's going to see it.
There are no Reels, there are no ads, there are no messaging features, there's no weird soft-core porn, there are no memes. It's just pictures. And I really like that. It's like riding a bicycle through the countryside after driving a car through a city. It's a real different way to experience photo sharing, because it's exactly like the original way of experiencing photo sharing, and I'd forgotten what that feels like.

Katie Drummond: Oh, it sounds lovely.

Lauren Goode: What's cool about the app too is that when you open it and you haven't filled out that week's photos, when you tap on it, it automatically identifies those photos from that week in your camera roll. It's like, "You shot these photos between Sunday and Saturday, and here's where you can fill this week in."

Michael Calore: And—

Lauren Goode: It's pretty cool.

Michael Calore: And all the photos from the week stack up. So, if you post 12 photos, and then you look at my profile, you can just tap through all 12 photos, and then that's it. That's all you get.

Lauren Goode: Good job, Nathan and team.

Michael Calore: Who's Nathan? Who are you shouting out?

Lauren Goode: Nathan Sharp is one of the cofounders. He's a former Instagram guy, and I think his cofounder is as well. It was founded by two ex-Instagram employees. And the whole idea is they're trying to make, well, it's not the anti-Instagram, but it is more private.

Michael Calore: Feels like the anti-Instagram right now.

Lauren Goode: It's nice. It's a nice place to hang out.

Michael Calore: Well, thanks to both of you for those great recommendations.

Lauren Goode: Thanks, Mike, for yours.

Katie Drummond: Yeah, Mike, thanks.

Lauren Goode: Thanks, Mike.

Katie Drummond: Bye.

Lauren Goode: See you on the jet.

Michael Calore: And thanks to you for listening to Uncanny Valley. If you liked what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you'd like to get in touch with us with any questions, comments, or show suggestions, please write to us at uncannyvalley@ We'd love to hear from you. Today's show is produced by Kyana Moghadam. Amar Lal at Macro Sound mixed this episode. Jake Loomis was our New York Studio engineer. Daniel Roman fact-checked this episode. Jordan Bell is our Executive Producer. Katie Drummond is WIRED's Global Editorial Director, and Chris Bannon is the Head of Global Audio.
