Latest news with #wokeAI


WIRED
a day ago
- Business
- WIRED
Is Silicon Valley Losing Its Influence on DC?
By Zoë Schiffer and Jake Lahut | Jul 28, 2025, 1:14 PM

This episode of Uncanny Valley covers black holes, woke AI, and the relationship between Silicon Valley billionaires and the Trump administration.

Peter Thiel speaks at The Cambridge Union on May 08, 2024, in Cambridge, Cambridgeshire. Photo-Illustration: WIRED Staff

In today's episode, WIRED's director of business and industry, Zoë Schiffer, is joined by senior writer Jake Lahut to run through five of the most important stories we published this week—from Trump's newly unveiled AI plan to how supermassive black holes could have originated. Plus, they dive into why the relationship between Silicon Valley and DC is undergoing some major changes.

Mentioned in this episode:
- Trump's AI Action Plan Is a Crusade Against 'Bias'—and Regulation by Kate Knibbs and Will Knight
- Newly Discovered 'Infinity Galaxy' Could Prove How Ancient Supermassive Black Holes Formed by Jorge Garay
- How Trump Killed Cancer Research by Elisa Muyl and Anthony Lydgate
- The Great Crypto Re-Banking Has Begun by Joel Khalili
- The GOP's Message for Tech Billionaires: Be Like Peter Thiel by Jake Lahut

You can follow Zoë Schiffer on Bluesky at @zoeschiffer and Jake Lahut on Bluesky at @. Write to us at uncannyvalley@.

How to Listen: You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too.

Transcript
Note: This is an automated transcript, which may contain errors.

Zoë Schiffer: Hey, this is Zoë. Before we start, I want to tell you about the new WIRED subscription program. If you're already a subscriber, thank you so much for supporting us. If you haven't signed up yet, this is a great time to do so. You'll have access to newsletters with exclusive analysis from WIRED reporters and access to livestream AMAs where you can ask your most pressing questions. Head over to to learn more. Welcome to WIRED's Uncanny Valley. I'm WIRED's director of business and industry, Zoë Schiffer. Today on the show, we're bringing you five stories that you need to know about this week. And later we'll dive into our main segment on how the influence of Silicon Valley is shifting in D.C., and why Republicans think tech leaders should follow the example of Peter Thiel. I'm joined today by WIRED's senior writer, Jake Lahut.

Jake Lahut: Hey, Zoë, great to be with you. First time.

Zoë Schiffer: Jake, you and I were both watching Trump's speech yesterday. This was a keynote speech that he gave at an event hosted by the All-In Podcast, and he was talking about AI regulation. This came after he had already put out an AI action plan, which was this really long document that outlined 90-plus policies focusing on three main goals: accelerating AI innovation, building AI infrastructure, and leading international diplomacy and security. So I guess just to start, can I get your high-level take on the speech? Because I don't watch Trump a ton, and every time I do, I'm really struck by how his vibe is just so fascinating and so funny to me.

Jake Lahut: Oh, absolutely. So to give the listeners a little bit of background on where I come from: I have been to more Trump rallies than I can count, in person that is, covering them in a professional capacity.
And the weird thing about seeing Trump speak is you can really get a feel for, one, how much he just wants to be there. And this was definitely a case where he didn't seem like he really wanted to be doing this speech. There's this difference in the cadence and tone of his voice: even if you're not watching it, you can tell when he's looking at the teleprompter and when he's looking away to riff, because he has this very rote cadence when he's reading off the prompter. So I would say this was a pretty middle-of-the-road, Trump-phoning-it-in type of speech.

Zoë Schiffer: Yeah. I had a one-on-one at the beginning of the speech and the person I was supposed to talk to was like, "Wait, should we cancel it? Should we push it back?" I was like, "It's Trump. I feel pretty confident that he's not going to start talking about AI until like 30 minutes in." But he really was talking about rolling back any level of regulation that we already have and definitely not imposing new regulations on AI companies, really trying to remove roadblocks so they can go innovate as quickly as possible. He framed it as an all-out race. And he talked a lot about woke AI. So maybe let's get into that a little bit because this is a sticking point for him.

Jake Lahut: Yeah. And I still don't really know what he means by woke AI. I mean, we did see at the outset of some of these models, I guess going back to 2022, '23, I remember Google's Gemini had those weird renditions of the founding fathers where there were very few, if any, white ones. Maybe he's talking about that. But I'm also wondering, is he talking about going full Grok with what we're going to be supporting? And then the other thing, just on a very high level, that I found interesting was he's basically saying China doesn't play by the rules. And if they're not going to play by the rules and we want to beat China, then we can't play by any rules either. And he extends that to saying we shouldn't even be paying publishers or anyone whose work these models are trained on. He had this example of like, well, you read a book and you learn something from the book, but that doesn't mean anybody should get paid. And I was like, okay.

Zoë Schiffer: Right.

Jake Lahut: And also, I've worked at least three publications now where my work has trained these models and I have not seen a dime. So, yeah, I was a little stupefied, to be honest, hearing that part.

Zoë Schiffer: Yeah. Our Slack was absolutely blowing up when he was-

Jake Lahut: It was.

Zoë Schiffer: ... redefining fair use in real time. Everyone was losing their minds. So speaking of the current administration, our next story deals with its role in a critical field, which is cancer research. Our colleagues Elisa Muyl and Anthony Lydgate analyzed how the Trump administration has erased hundreds of millions of dollars that were supposed to be used, and were being used, for cancer research. Specifically, the administration has paused an estimated $1.5 billion in funding to the National Institutes of Health, or NIH, which is the largest funder of cancer research in the world. And they've also effectively halted clinical trials of new drugs and laid off thousands of employees at the FDA, NIH, and the Centers for Disease Control and Prevention. And, get this, Trump officials have reportedly maintained a list of flagged keywords that they believe should trigger program reviews. Among the NIH grants terminated so far, the 50 most common flagged keywords included, as you might unfortunately expect, trans, expression, diverse, and women.
Jake Lahut: On the one hand, not surprised. But also this is the kind of thing where I do wonder how much it ends up penetrating into the actual news diets of folks who are not following this really closely. And when you look at the pie chart that we have in this story, just the sheer amount of this that's coming from Harvard is really quite staggering. And the weird part of this that's very different from Trump 1.0: it's like AI scraping meets his personal vendettas, and we get these really strange results.

Zoë Schiffer: Yeah. It's really interesting. I've thought about this with Elon Musk specifically a lot. He has thrived in environments where his actions, and he's an impulsive, risk-taking person, have a pretty quick consequence. He can see feedback really quickly. If he does something and it doesn't work, he can then pivot. But when you're in government, you don't get to see the impact of what you're doing for kind of a while. And so if cancer research grinds to a halt because of what the Trump administration is choosing not to fund, or what Elon Musk's DOGE team decided to slash contracts or funding for, we won't actually feel the impact of that for years. But the impact will be pretty devastating, you can imagine.

Jake Lahut: Yeah. And even the longer-term, second- and third-order effects of this. The brain drain aspect, I think, is going to be a very big story of this administration. Think about it: if you're an undergrad who's looking at master's- or PhD-type research, where are you going to go? Are you going to try to pivot your talents and go to the European Union? Are you going to just get a consulting job or try to go on Wall Street? I think these are really difficult conversations that a lot of these researchers probably never thought they would have in what I would imagine, for a lot of them, is a very demanding but fulfilling field, trying to literally cure cancer.

Zoë Schiffer: Totally. Okay. I need to shift us to a new topic because I'm already feeling quite depressed. This one is about outer space. So my first question for you is, how do you feel about outer space, Jake? And would you go if you had the chance? Normally I would say, no, thank you, I'm not interested. But I don't know, after this, maybe, yeah, take me to Mars.

Jake Lahut: Weirdly, I would probably be more comfortable going to outer space than doing deep ocean exploration. That could be a recency bias with the-

Zoë Schiffer: I was going to say.

Jake Lahut: ... the OceanGate thing. But no, I was super into not just the Apollo missions, but Gemini and Mercury as a kid. I had a little beanbag chair in my house where I would pretend it was the capsule-

Zoë Schiffer: Oh, my gosh.

Jake Lahut: ... that was reentering the atmosphere looking for all-

Zoë Schiffer: I can picture this perfectly.

Jake Lahut: Yeah. So huge Al Shepard, John Glenn fan. I love all that stuff. But I am also a rather large person, and I think the fitting into the vessel would be difficult.

Zoë Schiffer: Totally fair. I can't 100% guarantee you would see this if you did find a vessel that could take you, but WIRED contributor Jorge Garay reported that a team of astronomers from Yale and Copenhagen recently discovered two galaxies colliding with each other. They have called it the Infinity Galaxy. And this finding is pretty exciting because it could be the first direct evidence of how really old supermassive black holes were formed.

Jake Lahut: Yeah.
This is some Interstellar-type stuff, but it really does look like that sideways-eight formation. And I found this pretty mind-blowing, especially in our vertical video treatment of it, if you want to check this out on Instagram Reels. It's rather mesmerizing, I got to say.

Zoë Schiffer: So the prominent theory of how they form is when stars run out of fuel and collapse under their own gravity. But with very old supermassive black holes, there wouldn't have been enough time for the stars to get to that point. So this Infinity Galaxy supports another theory, that they were able to form from dense clumps of matter, so no star collapse needed.

Jake Lahut: At least for now, we are still discovering new things about science in the United States of America. And maybe that'll continue, maybe it won't.

Zoë Schiffer: Well, if not, AI apparently is going to step in and do everything for us.

Jake Lahut: DOGE can't go out there and catch up to the James Webb telescope, at least. No one's catching that bad boy anytime soon.

Zoë Schiffer: Okay. So our next story takes us back to Earth and, honestly, back to the Silicon Valley elite of it all. Our colleague Joel Khalili reported that crypto firms are finally getting more access to banking. This is actually critically important and covers both of our areas of reporting, Jake, because de-banking was a core reason that a lot of the Marc Andreessen types of the world really soured on the Biden administration and went all in on Trump. So now we're kind of seeing the fruits of that decision play out, because under the crypto-friendly Trump administration, a number of US fintech firms are competing to offer bank accounts to these crypto firms. But they still do need to follow the ground rules set by the partner bank involved, so there's no fully escaping the traditional banking system. But I'm curious on your take here, given how critical this was to the last election.

Jake Lahut: Oh, yeah. I mean, if you've ever had the pleasure of hearing Don Jr. talk about this, he gets real worked up about the de-banking. From my sourcing perspective, I remember being at the Republican convention last summer in Milwaukee, and I remember the second day there I'm bumping into some people I recognize. And every time I'm like, "Hey, what are you doing here? You're not on the Trump 2024 campaign." And maybe they were on it in 2016 or 2020. And it was like, "Nah, dude, I'm here with crypto. Yeah, check out this party later." Every time. And I'm like, "When did all of you end up working for these crypto companies?" But you got to think, in the context of after January 6th, if you were a former Trump White House official or you were on those campaigns, your cash-out options by usual D.C. swamp standards were very different. So you weren't going to be going to K Street and doing lobbying in the traditional sense. You weren't going to be getting those cushy jobs on some Wall Street legislative affairs team or whatever. So for a lot of them, this was kind of the only game in town. And that's where I find this development really interesting: could this end up affecting that Republican talent cash-out pipeline in some weird way? And I remain completely perplexed about the next phase, where Congress is going to try to set up some sort of securities market framework for these cryptocurrencies, where right now they've basically given them the stablecoin win.

Zoë Schiffer: Exactly. Okay, one more before we go to break.
Yesterday we published a story about the former DOGE offices at the General Services Administration in Washington, D.C. We had gotten a little tip that they were left in a bit of a mess when DOGE started to vacate said offices. Near the space where DOGE previously operated, there were stacks of mattresses that still had sheets on them, there were box springs, and then there was a whole corner of baby toys.

Jake Lahut: A lot, yeah.

Zoë Schiffer: A lot. We ended up publishing this story on kind of the scene, and we basically just published a bunch of photos of what we saw.

Jake Lahut: Well, Zoë, I think you're selling yourself short, because your writing and narrative description of this scene was just something to behold. And I really think you got to read it online, because the pictures are rather stunning. And I guess I would call this Dude Bro Chernobyl or something like that, where it was just like, whoa, this is the aftermath of something real serious going on here.

Zoë Schiffer: I wish I'd had that line. Coming up after the break, we dive into Jake's inside scoop on how the influence of Silicon Valley has been slowly but surely shifting in D.C. Stay with us.

Welcome back to Uncanny Valley. I'm Zoë Schiffer. I'm joined today by WIRED's senior writer Jake Lahut, who recently reported on the alliance between the Republican Party and Silicon Valley, and how it's still going strong but actively changing. Jake, this has been a long, long year so far. But it wasn't that long ago that Elon Musk was Trump's right hand, and tech leaders like Tim Cook and Jeff Bezos were attending the presidential inauguration. So what the heck has changed?

Jake Lahut: Well, I think both sides of the equation here are starting to figure each other out a little more. But based on the Republican strategists and people around Trump world I talk to, I think that at the moment they feel as if they have the better end of the deal here, where they got all this money that helped them rise to power and they don't feel necessarily too exploited by the tech community. But also there are a lot of lessons learned from Elon's crash-out. And the main one, as one strategist put it pretty succinctly, is this old axiom that the podiums are for the principals, the principals being the candidate, your lawmaker. And Elon really needed to stay off the podium, because he publicly attached himself to this thing so much, especially with that Wisconsin Supreme Court race where he was out there campaigning for it. And that kind of gave the Trump White House, and Republicans more broadly, a classic A/B case study of, okay, clearly he was a drag on the party brand here. And some of these strategists think that that's going to also hurt the Republicans to some degree in the midterms no matter what they do. So that's where it gets interesting with folks like Peter Thiel and, I think, the evolving thinking of: keep it behind the scenes and, more importantly, don't put all of your bets, in terms of donations, on a couple of candidates or, in this VC mindset, on safe seats. And instead, this notion of being a team player keeps coming up. House Republicans, consultants, they want the Silicon Valley donors to really spread the wealth around and to try to just shore up Republicans for the midterms.

Zoë Schiffer: Right, right. Because Peter Thiel is massively influential, but he's not centering himself all the time. It's rare that he actually gives interviews. He's not super outspoken on social media.
So he's more of this shadowy figure, which is a huge contrast to Elon Musk, who really made himself the main character in the Trump administration.

Jake Lahut: Yeah. And obviously the exception to Thiel being behind the scenes would be this interview we referenced in the story with Ross Douthat of the New York Times, where he got into all sorts of zany, wild stuff. But for the most part, he's really kind of Mr. Incognito, especially among the Republican base. So we had a source in the story basically saying most voters don't know who the fuck Peter Thiel is, and that works to their advantage.

Zoë Schiffer: It's so interesting, because I feel like one thing that happened with all of these tech leaders and Biden was that they felt like the relationship, the promise, had broken down. They were being supportive, they were giving money-

Jake Lahut: Yes.

Zoë Schiffer: ... and yet the Biden administration, and Biden in particular, were coming out and really slamming them and going after them. There were investigations; they had Lina Khan, who was targeting them for alleged antitrust violations. So it felt like a very hostile dynamic. And I think when they looked at Trump, maybe they disagreed with him on a number of policies, but it felt very pay-to-play. They were like, if I support him, if I put in money, I kind of know what I'm going to get. But I'm curious if you think that part of their equation has actually worked out for them so far.

Jake Lahut: Yeah. This is where I think there's a learning curve for not just the Silicon Valley billionaires, but these broader, newer donors from the tech world, where they have that kind of VC, disruptor, return-on-investment mindset. And the Republican consultants and strategists who I talk to are describing to me how you kind of got to sit these guys down and be like, "Look, you can't just come in here and say I give you X amount of money, you give me Y. You need to be involved for the long term, and maintaining this relationship is good for both of us." There was a quote that didn't make it into the story where someone said that there's probably like a 5% range of politicians where, if you add a zero to the donation, they will do exactly what you want. But most of these guys have been in the game long enough, if you're a House Republican on Capitol Hill or whatever, where you really can't be tipping your hand too much with that pay-to-play thing. And this is sort of the ongoing re-education of the valley by Republicans about how this stuff actually works.

Zoë Schiffer: I can see that as a headline for one of your future stories.

Jake Lahut: Thank you.

Zoë Schiffer: It feels like the crypto wing is still super important to the Republican Party. A source told you that crypto might be the glue that is keeping the tech world tied to politics. Is that what you're hearing from sources?

Jake Lahut: Yeah. And I think that's twofold. One is obviously that it remains very profitable for the Trump family, in terms of the meme coins and all that. And then the other is just the sheer amount of money they were able to pump in with these PACs. And I think what's very distinct about the crypto donations is that most industries, take oil and gas or your typical Republican money machines, want to advertise about their issue and their industry. The crypto money that came in, a lot of it went toward stuff that was totally unrelated to cryptocurrency, and that ended up being very valuable and flexible for Republicans.
However, a lot of the folks who were giving this money in the crypto space were kind of apolitical libertarians, and I think now they're a little confused, a little impatient. And another quote we had in the story was that there's just a lot of bumbling and fumbling among the crypto crowd, where they've really got to do a lot of catching up compared to the other sub-industries coming out of the valley.

Zoë Schiffer: Right, right. So the GOP seems to be doing this balancing act of keeping the tech industry close while maintaining just enough distance to avoid crash-outs like the one that we saw with Elon Musk. So what's at stake for them as they navigate this?

Jake Lahut: It's hard to make a prediction about the midterms and the impact that any potential drop-off in donations or whatever would have there. But in terms of the Trump base, which is already going through it with the Jeffrey Epstein saga, I think they've got to be careful about broadcasting and telegraphing too much chumminess and proximity here. There's a reason why the Biden White House decided to have this posture against big tech: they believe that most Americans, and certainly a lot of independent voters, have become much more skeptical and distrustful of just this broad notion of big tech. So when you look at someone like JD Vance, who kind of has this whole money train on lock, at least to start out, going into the 2028 Republican presidential primary, the base, and by the base I mean people who vote in Republican primaries, tend to be very distrustful of elites of any sort. And suddenly JD is going to be the establishment, and his connections with Peter Thiel and all these things are going to be more threaded over and more well known. So that would be the bigger risk, I think. It's more of a vibe aspect than the money train.

Zoë Schiffer: Jake, thank you so much for joining me today.

Jake Lahut: Great to be with you, Zoë. Thanks so much.

Zoë Schiffer: That's our show for today. We'll link to all the stories we spoke about in the show notes. Make sure to check out Thursday's episode of Uncanny Valley, which is about the growing industry of brain-computer interfaces. Adriana Tapia produced this episode. Amar Lal at Macrosound mixed this episode. Pran Bandi is our New York studio engineer. Kate Osborn is our executive producer. Condé Nast's head of global audio is Chris Bannon. And Katie Drummond is WIRED's global editorial director.


The Verge
5 days ago
- Business
- The Verge
Breaking down Trump's big gift to the AI industry
President Donald Trump's plan to promote America's AI dominance involves discouraging 'woke AI,' slashing state and federal regulations, and laying the groundwork to rapidly expand AI development and adoption. Trump's proposal, released on July 23rd, is a sweeping endorsement of the technology, full of guidance that ranges from specific executive actions to directions for future research. Some of the new plan's provisions (like promoting open-source AI) have garnered praise from organizations that are often broadly critical of Trump, but the loudest acclaim has come from tech and business groups, whose members stand to gain from fewer restrictions on AI.

'The difference between the Trump administration and Biden's is effectively night and day,' says Patrick Hedger, director of policy at tech industry group NetChoice. 'The Biden administration did everything it could to command and control the fledgling but critical sector … The Trump AI Action Plan, by contrast, is focused on asking where the government can help the private sector, but otherwise, get out of the way.'

Others are far more ambivalent. The Future of Life Institute, which led an Elon Musk-backed push for an AI pause in 2023, said it was heartened to see the Trump administration acknowledge that serious risks, like bioweapons or cyberattacks, could be exacerbated by AI. 'However, the White House must go much further to safeguard American families, workers, and lives,' says Anthony Aguirre, FLI's executive director. 'By continuing to rely on voluntary safety commitments from frontier AI corporations, it leaves the United States at risk of serious accidents, massive job losses, extreme concentrations of power, and the loss of human control. We know from experience that Big Tech promises alone are simply not enough.' For now, here are the ways that Trump aims to promote AI.

Congress failed to pass a moratorium on states enforcing their own AI laws as part of a recent legislative package, but a version of that plan was resurrected in this document. 'AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level,' the plan says. 'The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states' rights to pass prudent laws that are not unduly restrictive to innovation.' To do this, it suggests federal agencies that dole out 'AI-related discretionary funding' should 'limit funding if the state's AI regulatory regimes may hinder the effectiveness of that funding or award.' It also suggests the Federal Communications Commission (FCC) 'evaluate whether state AI regulations interfere with the agency's ability to carry out its obligations and authorities under the Communications Act of 1934.'

The Trump administration also wants the Federal Trade Commission (FTC) to take a hard look at existing AI regulations and agreements to see what it can scale back. It recommends the agency reevaluate investigations launched during the Biden administration 'to ensure that they do not advance theories of liability that unduly burden AI innovation,' and suggests it could throw out burdensome aspects of existing FTC agreements.
Some AI-related actions taken during the Biden administration that the FTC might now reconsider include banning Rite Aid's use of AI facial recognition that allegedly falsely identified shoplifters, and taking action against AI-related claims the agency previously found to be deceptive.

Trump's plan includes policies designed to help encode his preferred politics in the world of AI. He's ordered a revision of the Biden-era National Institute of Standards and Technology (NIST) AI Risk Management Framework — a voluntary set of best practices for designing safe AI systems — removing 'references to misinformation, Diversity, Equity, and Inclusion, and climate change.' (The words 'misinformation' and 'climate change' don't actually appear in the framework, though misinformation is discussed in a supplementary file.) In addition to that, a new executive order bans federal agencies from procuring what Trump deems 'woke AI' or large language models 'that sacrifice truthfulness and accuracy to ideological agendas,' including things like racial equity.

This section of the plan 'seems to be motivated by a desire to control what information is available through AI tools and may propose actions that would violate the First Amendment,' says Kit Walsh, director of the Electronic Frontier Foundation (EFF). 'The plan seeks to require that 'the government only contracts with' developers who meet the administration's ideological criteria. While the government can choose to purchase only services that meet such criteria, it cannot require that developers refrain from also providing non-government users other services conveying other ideas.'

The administration describes the slow uptake of AI tools across the economy, including in sensitive areas like healthcare, as a 'bottleneck to harnessing AI's full potential.' The plan describes this cautious approach as one fueled by 'distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards.' To promote the use of AI, the White House encourages a ''try-first' culture for AI across American industry.' This includes creating domain-specific standards for adopting AI systems and measuring productivity increases, as well as regularly monitoring how US adoption of AI compares to international competitors.

The White House also wants to integrate AI tools throughout the government itself, including by detailing staff with AI expertise at various agencies to other departments in need of that talent, training government employees on AI tools, and giving agencies ample access to AI models. The plan also specifically calls out the need to 'aggressively adopt AI within its Armed Forces,' including by introducing AI curricula at military colleges and using AI to automate some work.

All this AI adoption will profoundly change the demand for human labor, the plan says, likely eliminating or fundamentally changing some jobs. The plan acknowledges that the government will need to help workers prepare for this transition period by retraining people for more in-demand roles in the new economy and providing tax benefits for certain AI training courses. On top of preparing to transition workers from traditional jobs that might be upended by AI, the plan discusses the need to train workers for the additional roles that might be created by it. Among the jobs that might be needed for this new reality are 'electricians, advanced HVAC technicians, and a host of other high-paying occupations,' the plan says.
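The 'woke AI' procurement order above targets model behavior, but in practice much of a chatbot's apparent ideology lives in a configurable instruction layer sitting on top of the weights. As a rough, hypothetical sketch (the prompts here are invented for illustration, and the example uses the OpenAI Python client; any chat-style API would behave similarly), the same underlying model can be steered in different directions by its system prompt alone:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

QUESTION = "Should the federal government regulate AI companies?"

# Two invented instruction layers steering the same underlying model.
SYSTEM_PROMPTS = [
    "Answer as a small-government skeptic of regulation.",
    "Answer as a consumer-protection advocate.",
]

for system_prompt in SYSTEM_PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    # The visible "ideology" shifts with the instruction layer, not the weights.
    print(system_prompt, "->", response.choices[0].message.content[:100])
```

Which is part of why procurement rules aimed at a model's 'ideology' are hard to pin down: the same weights can ship with any number of instruction layers.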
The administration says it wants to 'create a supportive environment for open models,' or AI models that allow users to modify the code that underpins them. Open models have certain 'pros,' like being more accessible to startups and independent developers. Groups like EFF and the Center for Democracy and Technology (CDT), which were critical of many other aspects of the plan, applauded this part. EFF's Walsh called it a 'positive proposal' to promote 'the development of open models and making it possible for a wider range of people to participate in shaping AI research and development. If implemented well, this could lead to a greater diversity of viewpoints and values reflected in AI technologies, compared to a world where only the largest companies and agencies are able to develop AI.'

That said, there are also serious 'cons' to the approach that the AI Action Plan didn't seem to get into. For instance, the nature of open models makes them easier to trick and misalign for purposes like creating misinformation on a large scale, or chemical or biological weapons. It's easier to get past built-in safeguards with such models, and it's important to think critically about the tradeoffs before taking steps to drive open-source and open-weight model adoption at scale.

Trump signed an executive order on July 23rd meant to fast-track permitting for data center projects. The EO directs the commerce secretary to 'launch an initiative to provide financial support' that could include loans, grants, and tax incentives for data centers and related infrastructure projects. Following a similar move by former President Joe Biden, Trump's plan directs agencies to identify federal lands suitable for the 'large-scale development' of data centers and power generation. The EO tells the Department of Defense to identify suitable sites on military installations and the Environmental Protection Agency (EPA) to identify polluted Superfund and Brownfield sites that could be reused for these projects.

The Trump administration is hellbent on dismantling environmental regulations, and the EO now directs the EPA to modify rules under the Clean Air Act, Clean Water Act, and Toxic Substances Control Act to expedite permitting for data center projects. The EO and the AI plan, similar to a Biden-era proposal, direct agencies to create 'categorical exclusions' for federally supported data center projects that would exclude them from detailed environmental reviews under the National Environmental Policy Act. And they argue for using new AI tools to speed environmental assessments and applying the 'Fast-41 process' to data center projects to streamline federal permitting.

The Trump administration is basically using the AI arms race as an excuse to slash environmental regulations for data centers, energy infrastructure, and computer chip factories. Last week, the administration exempted coal-fired power plants and facilities that make chemicals for semiconductor manufacturing from Biden-era air pollution regulations. The plan admits that AI is a big factor 'increasing pressures on the [power] grid.' Electricity demand is rising for the first time in more than a decade in the US, thanks in large part to data centers — a trend that could trigger blackouts and raise Americans' electricity bills. Trump's AI plan lists some much-needed fixes to stabilize the grid, including upgrading power lines and managing how much electricity consumers use when demand spikes.
But the administration is saying that the US needs to generate more electricity to power AI just as it's stopping renewable energy growth, which is like trying to win a race in a vehicle with no front wheels. It wants to meet growing demand with fossil fuels and nuclear energy. 'We will continue to reject radical climate dogma,' the plan says. It argues for keeping existing, mostly fossil-fueled power plants online for longer and limiting environmental reviews to get data centers and new power plants online faster. The lower cost of gas generation has been killing coal power plants for years, but now a shortage of gas turbines could stymie Trump's plans. New nuclear technologies that tech companies are investing in for their data centers probably won't be ready for commercial deployment until the 2030s at the earliest. Republicans, meanwhile, have passed legislation to hobble the solar and wind industries that have been the fastest-growing sources of new electricity in the US.

'Prioritize fundamental advancements in AI interpretability'

The Trump administration accurately notes that while developers and engineers know how today's advanced AI models work in a big-picture way, they 'often cannot explain why a model produced a specific output. This can make it hard to predict the behavior of any specific AI system.' It's aiming to fix that, at least when it comes to some high-stakes use cases. The plan states that the lack of AI explainability and predictability can lead to issues in defense, national security, and 'other applications where lives are at stake,' and it aims to promote 'fundamental breakthroughs on these research problems.'

The plan's recommended policy actions include launching a tech development program led by the Defense Advanced Research Projects Agency to advance AI interpretability, control systems, and security. It also said the government should prioritize fundamental advancements in such areas in its upcoming National AI R&D Strategic Plan and, perhaps most specifically, that the DOD and other agencies should coordinate an AI hackathon to allow academics to test AI systems for transparency, effectiveness, and vulnerabilities.

It's true that explainability and unpredictability are big issues with advanced AI. Elon Musk's xAI, which recently scored a large-scale contract with the DOD, recently struggled to stop its Grok chatbot from spouting pro-Hitler takes — so what happens in a higher-stakes situation? But the government seems unwilling to slow down while this problem is addressed. The plan states that since 'AI has the potential to transform both the warfighting and back-office operations of the DOD,' the US 'must aggressively adopt AI within its Armed Forces if it is to maintain its global military preeminence.'

The plan also discusses how to better evaluate AI models for performance and reliability, like publishing guidelines for federal agencies to conduct their own AI system evaluations for compliance and other reasons. That's something most industry leaders and activists support greatly, but it's clear what the Trump administration has in mind will lack a lot of the elements they have been pushing for. Evaluations likely will focus on efficiency and operations, according to the plan, and not instances of racism, sexism, bias, and downstream harms.

Courtrooms and AI tools mix in strange ways, from lawyers using hallucinated legal citations to an AI-generated appearance of a deceased victim.
The plan says that 'AI-generated media' like fake evidence 'may present novel challenges to the legal system,' and it briefly recommends the Department of Justice and other agencies issue guidance on how to evaluate and deal with deepfakes in federal evidence rules.

Finally, the plan recommends creating new ways for the research and academic community to access AI models and compute. The way the industry works right now, many companies, and even academic institutions, can't access or pay for the amount of compute they need on their own, and they often have to partner with hyperscalers — providers of large-scale cloud computing infrastructure, like Amazon, Google, and Microsoft — to access it. The plan wants to fix that issue, saying that the US 'has solved this problem before with other goods through financial markets, such as spot and forward markets for commodities.' It recommends collaborating with the private sector, as well as government departments and the National Science Foundation's National AI Research Resource pilot, to 'accelerate the maturation of a healthy financial market for compute.' It didn't offer any specifics or additional plans for that.
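On the interpretability passage above ('often cannot explain why a model produced a specific output'): one family of techniques this research builds on is attribution, tracing an output back to the inputs that most influenced it. Here is a minimal, hypothetical sketch of gradient-based saliency on a toy classifier (everything here, from the model to the token IDs, is invented for illustration; real interpretability work targets far larger models):

```python
import torch
import torch.nn as nn

# Toy classifier: embedding -> mean pool -> linear head.
torch.manual_seed(0)
vocab_size, dim = 50, 16
emb = nn.Embedding(vocab_size, dim)
head = nn.Linear(dim, 2)

tokens = torch.tensor([[3, 17, 42, 8]])   # one fake 4-token input
vecs = emb(tokens)                        # (1, 4, dim)
vecs.retain_grad()                        # keep gradients for attribution
logits = head(vecs.mean(dim=1))           # (1, 2)
logits[0, 1].backward()                   # gradient of the class-1 score

# Saliency: L2 norm of the gradient per token, a crude measure of how
# much each input token influenced this particular output.
saliency = vecs.grad.norm(dim=-1).squeeze(0)
print(saliency.tolist())
```

The per-token scores say which inputs mattered most for this one prediction; the hard open problem the plan names is doing anything like this reliably for frontier-scale models.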

News.com.au
6 days ago
- Business
- News.com.au
Donald Trump signs executive orders ramping up AI exports with caveat of ending unchecked ‘woke' technology
Donald Trump has signed three executive orders that he claims will crown America victor of the 'AI race,' including a hard stance on 'woke' artificial intelligence. The president and his administration unveiled their AI action plan, 'Winning the AI Race,' in Washington, which looks to lift the restrictions and 'red tape' of safeguards and standards on AI imposed under former President Joe Biden that, the administration argues, limited its development. The orders ramp up the export and sale of US AI software and hardware overseas and look to speed up construction of data centres in the US running AI products.

'America is the country that started the AI race. And as president of the United States, I'm here today to declare that America is going to win it,' Trump declared. 'We also have to have a single federal standard, not 50 different states regulating this industry in the future. America must once again be a country where innovators are rewarded with a green light, not strangled with red tape.'

The 24-page plan includes over 90 recommendations as the US looks to get ahead of countries like China, which is also quickly developing AI models, in hopes of becoming the global leader in the new tech. 'Winning this competition will be a test of our capacities unlike anything since the dawn of the space age,' the plan reads.

The orders are also part of Mr Trump's expansive campaign targeting institutions such as schools and agencies promoting diversity. 'The American people do not want woke Marxist lunacy in the AI models, and neither do other countries,' he said. The move extends Mr Trump's longstanding grievances with tech companies that Republicans have accused of quashing right-wing principles. With AI's output across the internet having become nearly unrestricted and unchecked in recent times, companies will need to comply with the plan if they want its promised easing of restrictions.

On top of loosened restrictions, which the tech industry has been pushing for, the president's orders also emphasised the AI race as one of geopolitical supremacy. With China already investing billions in manufacturing for AI products and data centres, its output is expected to remain ahead of the US's for now, something the Trump administration hopes to change.

The most bizarre moment came with Mr Trump suggesting a change of name for AI, claiming he doesn't like the word 'artificial.' 'I can't stand it,' he said. 'I don't even like the name, you know? I don't like anything that's artificial. So could we straighten that out, please? We should change the name. I actually mean that. It's not artificial. It's genius.'


CNN
6 days ago
- Politics
- CNN
Are AI models ‘woke'? The answer isn't so simple
President Donald Trump wants to make the United States a leader in artificial intelligence – and that means scrubbing AI models of what he believes are 'woke' ideals. The president on Wednesday said he signed an executive order prohibiting the federal government from procuring AI technology that has 'been infused with partisan bias or ideological agendas such as critical race theory.' It's an indication that his push against diversity, equity and inclusion is now expanding to the technology that some expect to be as critical for finding information online as the search engine.

The move is part of the White House's AI action plan announced on Wednesday, a package of initiatives and policy recommendations meant to push the US forward in AI. The 'preventing woke AI in the federal government' executive order requires that government-used AI large language models – the type of models that power chatbots like ChatGPT – adhere to Trump's 'unbiased AI principles,' including that AI be 'truth-seeking' and show 'ideological neutrality.' 'From now on, the US government will deal only with AI that pursues truth, fairness and strict impartiality,' he said during the event.

It brings up an important question: Can AI be ideologically biased, or 'woke'? It's not such a straightforward answer, according to experts. AI models are largely a reflection of the data they're trained on, the feedback they receive during that training process and the instructions they're given – all of which influence whether an AI chatbot provides an answer that seems 'woke,' which is itself a subjective term. That's why bias in general, political or not, has been a sticking point for the AI industry. 'AI models don't have beliefs or biases the way that people do, but it is true that they can exhibit biases or systematic leanings, particularly in response to certain queries,' Oren Etzioni, former CEO of the Seattle-based AI research nonprofit the Allen Institute for Artificial Intelligence, told CNN.

Trump's executive order includes two 'unbiased AI principles.' The first one, called 'truth seeking,' says large language models should 'be truthful in seeking factual information or analysis.' That means they should prioritize factors like historical accuracy and scientific inquiry when asked for factual answers, according to the order. The second principle, 'ideological neutrality,' says large language models used for government work should be 'neutral' and 'nonpartisan' and that they shouldn't manipulate responses 'in favor of ideological dogmas such as DEI.'

'In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex,' the executive order says. Developers shouldn't 'intentionally code partisan or ideological judgements' into the model's responses unless the user prompts them to do so, the order says. The focus is primarily on AI models procured by the government, as the order says the federal government should be 'hesitant to regulate the functionality of AI models in the private marketplace.'
But many major technology companies have contracts with the government; Google, OpenAI, Anthropic and xAI were each awarded $200 million to 'accelerate Department of Defense adoption of advanced AI capabilities' earlier this month, for example.

The new directive builds on Trump's longstanding claims of bias in the tech industry. In 2019, during Trump's first term, the White House urged social media users to file a report if they believed they'd been 'censored or silenced online' on sites like Twitter, now named X, and Facebook because of political bias. However, Facebook data from 2020 showed that conservative news content significantly outperformed more neutral content on the platform. Trump also signed an executive order in 2020 targeting social media companies after Twitter labeled two of his posts as potentially misleading.

On Wednesday, Senator Edward Markey (D-Massachusetts) said he sent letters to the CEOs of Google parent Alphabet, Anthropic, OpenAI, Meta, Microsoft and xAI, pushing back against Trump's 'anti-woke AI actions.' 'Even if the claims of bias were accurate, the Republicans' effort to use their political power — both through the executive branch and through congressional investigations — to modify the platforms' speech is dangerous and unconstitutional,' he wrote.

While bias can mean different things to different people, some data suggests people see political bents in certain AI responses. A paper from the Stanford Graduate School of Business published in May found that Americans view responses from certain popular AI models as being slanted to the left. Brown University research from October 2024 also found that AI tools can be altered to take stances on political topics. 'I don't know whether you want to use the word 'biased' or not, but there's definitely evidence that, by default, when they're not personalized to you … the models on average take left wing positions,' said Andrew Hall, a professor of political economy at Stanford Graduate School of Business who worked on the May research paper.

That's likely because of how AI chatbots learn to formulate responses: AI models are trained on data, such as text, videos and images from the internet and other sources. Then humans provide feedback to help the model determine the quality of its answers. Changing AI models to tweak their tone could also result in unintended side effects, Himanshu Tyagi, a professor at the Indian Institute of Science and co-founder of AI company Sentient, previously told CNN. One adjustment, for example, might cause another unexpected change in how a model works. 'The problem is that our understanding of unlocking this one thing while affecting others is not there,' Tyagi told CNN earlier this month. 'It's very hard.'

Elon Musk's Grok AI chatbot spewed antisemitism in response to user prompts earlier this month. The outburst happened after xAI — the Musk-led tech company behind Grok — added instructions for the model to 'not shy away from making claims which are politically incorrect,' according to system prompts for the chatbot publicly available on the software developer platform GitHub and spotted by The Verge. xAI apologized for the chatbot's behavior and attributed it to a system update. In other instances, AI has struggled with accuracy. Last year, Google temporarily paused its Gemini chatbot's ability to generate images of humans after it was criticized for creating images that included people of color in contexts that were historically inaccurate.
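The 'humans provide feedback' step described above is usually implemented as preference learning: annotators pick the better of two responses, and a reward model is trained to score responses so the preferred ones come out higher. Below is a minimal, hypothetical sketch of that pairwise training objective (a standard Bradley-Terry-style loss); the embeddings are random stand-ins, not real data:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in reward model: maps a response embedding to a scalar score.
# (Real systems score text with a large transformer; this is a toy.)
reward_model = nn.Linear(32, 1)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Fake "human feedback": pairs where annotators preferred one response.
chosen = torch.randn(64, 32)    # embeddings of preferred responses
rejected = torch.randn(64, 32)  # embeddings of dispreferred responses

for step in range(200):
    margin = reward_model(chosen) - reward_model(rejected)
    # Pairwise loss: push preferred scores above rejected ones.
    loss = -nn.functional.logsigmoid(margin).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Which responses the annotators prefer, and which guidelines they follow, is exactly where the subjective judgments the article describes enter the pipeline: the model ends up reflecting the preferences it was scored against.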
Hall, the Stanford professor, has a theory about why AI chatbots may produce answers that people view as slanted to the left: Tech companies may have put extra guardrails in place to prevent their chatbots from producing content that could be deemed offensive. 'I think the companies were kind of like guarding against backlash from the left for a while, and those policies may have further created this sort of slanted output,' he said.

Experts say vague descriptions like 'ideological bias' will make it challenging to shape and enforce new policy. Will there be a new system for evaluating whether an AI model has ideological bias? Who will make that decision? The executive order says vendors would comply with the requirement by disclosing the model's system prompt, or set of backend instructions that guide how LLMs respond to queries, along with its 'specifications, evaluations or other relevant documentation.' But questions still remain about how the administration will determine whether models adhere to the principles. After all, avoiding some topics or questions altogether could be perceived as a political response, said Mark Riedl, a professor of computing at the Georgia Institute of Technology. It may also be possible to work around constraints like these by simply commanding a chatbot to respond like a Democrat or Republican, said Sherief Reda, a professor of engineering and computer science at Brown University who worked on its 2024 paper about AI and political bias.

For AI companies looking to work with the government, the order could be yet another requirement companies would have to meet before shipping out new AI models and services, which could slow down innovation – the opposite of what Trump is trying to achieve with his AI action plan. 'This type of thing… creates all kinds of concerns and liability and complexity for the people developing these models — all of a sudden, they have to slow down,' said Etzioni.
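The order does not spell out what 'disclosing the system prompt' would look like in practice, but one can imagine a simple compliance artifact. A purely hypothetical sketch (the manifest format, field names, and model identifier are invented, not anything the order prescribes): a vendor publishes the exact prompt text plus a hash, so an auditor can later check that the deployed prompt matches the disclosed one:

```python
import hashlib
import json

# Hypothetical system prompt a vendor might disclose under the order.
system_prompt = "You are a helpful assistant. Answer factually and concisely."

# Invented disclosure manifest: the prompt plus a fingerprint an auditor
# could compare against the prompt actually deployed in production.
manifest = {
    "model": "example-model-v1",  # hypothetical model identifier
    "system_prompt": system_prompt,
    "sha256": hashlib.sha256(system_prompt.encode("utf-8")).hexdigest(),
}
print(json.dumps(manifest, indent=2))
```

Even with such a mechanism, the experts' objection stands: a disclosed prompt says nothing about leanings baked in through training data or human feedback, which is where much of the behavior the order targets actually originates.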

