
How the Loudest Voices in AI Went From ‘Regulate Us' to ‘Unleash Us'
May 30, 2025, 10:00 AM

Two years after Sam Altman pitched Congress on AI guardrails, he's back in Washington with a new message: To beat China, invest in OpenAI.

On May 16, 2023, Sam Altman appeared before a subcommittee of the Senate Judiciary Committee. The title of the hearing was 'Oversight of AI.' The session was a lovefest, with both Altman and the senators celebrating what Altman called AI's 'printing press moment'—and acknowledging that the US needed strong laws to avoid its pitfalls. 'We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,' he said. The legislators hung on Altman's every word as he gushed about how smart laws could allow AI to flourish—but only within firm guidelines that both lawmakers and AI builders deemed vital at that moment. Altman was speaking for the industry, which widely shared his attitude. The battle cry was 'Regulate Us!'
This is an essay from the latest edition of Steven Levy's Plaintext newsletter.
Two years later, on May 8 of this year, Altman was back in front of another group of senators. The senators and Altman were still singing the same tune, but one pulled from a different playlist. This hearing was called 'Winning the AI Race.' In DC, the word 'oversight' has fallen out of favor, and the AI discourse is no exception. Instead of advocating for outside bodies to examine AI models to assess risks, or for platforms to alert people when they are interacting with AI, committee chair Ted Cruz argued for a path where the government would not only fuel innovation but remove barriers like 'overregulation.' Altman was on board with that. His message was no longer 'regulate me' but 'invest in me.' He said that overregulation—like the rules adopted by the European Union or one bill recently vetoed in California—would be 'disastrous.' 'We need the space to innovate and to move quickly,' he said. Safety guardrails might be necessary, he affirmed, but they needed to involve 'sensible regulation that does not slow us down.'
What happened? For one thing, the panicky moment just after everyone got freaked out by ChatGPT passed, and it became clear that Congress wasn't going to move quickly on AI. But the biggest development is that Donald Trump took back the White House and hit the brakes on the Biden administration's nuanced, pro-regulation approach. The Trump doctrine of AI regulation seems suspiciously close to that of Trump supporter Marc Andreessen, who declared in his Techno-Optimist Manifesto that AI regulation was literally a form of murder because 'any deceleration of AI will cost lives.' Vice President J.D. Vance made these priorities explicit at an international gathering in Paris this February. 'I'm not here … to talk about AI safety, which was the title of the conference a couple of years ago,' he said. 'We believe that excessive regulation of the AI sector could kill a transformative industry just as it's taking off, and we'll make every effort to encourage pro-growth AI policies.' The administration later unveiled an AI Action Plan 'to enhance America's position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation.'
Two foes have emerged in this movement. First is the European Union, which has adopted a regulatory regime that demands transparency and accountability from major AI companies. The White House despises this approach, as do those building AI businesses in the US.
But the biggest bogeyman is China. The prospect of the People's Republic besting the US in the 'AI Race' is so unthinkable that regulation must be put aside, or done with what both Altman and Cruz described as a 'light touch.' Some of this reasoning comes from a theory known as 'hard takeoff,' which posits that AI models can reach a tipping point where lightning-fast self-improvement launches a dizzying gyre of supercapability, also known as AGI. 'If you get there first, you dastardly person, I will not be able to catch you,' says former Google CEO Eric Schmidt, with the 'you' being a competitor (Schmidt had been speaking about China's status as a leader in open source). Schmidt is one of the loudest voices warning about this possible future. But the White House is probably less interested in the Singularity than it is in classic economic competition.
The fear of China pulling ahead on AI is the key driver of current US policy, safety be damned. The party line even objects to individual states trying to fill the vacuum of inaction with laws of their own. The version of the tax-break-giving, Medicaid-cutting megabill just passed by the House included a mandated moratorium on any state-level AI legislation for 10 years. That's like an eternity in terms of AI progress. (Pundits are saying that this provision won't survive opposition in the Senate, but it should be noted that almost every Republican in the House voted for it.)
It's not surprising that Trumpworld would reject regulation and embrace a jingoistic stance on AI. But what happened to the seemingly genuine appetite in the industry for rules to ensure AI products don't run amok? I contacted several of the top AI companies this week and was pointed to published blogs and transcripts from speeches and public testimony, but no executive would go on record on the topic. (To be fair, I didn't give them much time.)
Typical of those materials was OpenAI's policy blog. It asks for 'freedom to innovate,' meaning, in all likelihood, no burdensome laws; strong export controls; and an opportunistic request for 'freedom to learn.' This is a euphemistic request for Congress to redefine intellectual property as 'fair use' so OpenAI and other companies can train their models with copyrighted materials—without compensating the creators. Microsoft is also asking for this bonanza. (Disclosure: I am on the council of the Authors Guild, which is suing OpenAI and Microsoft over the use of copyrighted books as training materials. Opinions expressed here are my own.)
The 'light-touch' (or no-touch) regulatory camp does have an excellent point to make: No one is sure how to craft laws that prevent the worst dangers of AI without slowing the pace of innovation. But aside from avoiding catastrophic risk, there are plenty of other areas where AI regulation would not introduce speed bumps to research. These involve banning certain kinds of AI surveillance, deepfakes, and discrimination; clearly informing people when they are interacting with robots; and mandating higher standards to protect personal data in AI systems. (I admit I cheated in making that list—not by using ChatGPT, but by drawing on the kinds of AI harms that the House of Representatives would not allow states to regulate.)
Public pressure, or some spectacular example of misuse, may lead Congress to address those AI issues at some point. But what lingers for me is the about-face from two years ago, when serious worries about catastrophic risk dominated conversations in the AI world. The glaring exception to this is Anthropic, which still hasn't budged from a late-October blog post—published just days before the presidential election—that not only urged effective regulation to 'reduce catastrophic risks' but pretty much predicted the end times if we didn't act soon. 'Governments should urgently take action on AI policy in the next eighteen months,' it read, in boldface. 'The window for proactive risk prevention is closing fast.'
In this environment, there is virtually no chance that Anthropic will get its wish. Maybe it won't matter: It could be that fears of an AI apocalypse are way overblown. Take note, though, that the leaders of just about every single major AI company are predicting that in a few years, we will realize artificial general intelligence. When you press them, they will also admit that controlling AI, or even understanding how it works, is a work in progress. Nonetheless, the focus is now on hastening the push to more powerful AI—ostensibly to beat China.
The Chinese people have made it clear they don't want to report to robot overlords any more than we do. America's top geopolitical rival has also demonstrated some interest in imposing strong safety standards. But if the United States insists on eschewing guardrails and going full speed toward a future it can't contain, our biggest competitor will have no choice but to do the same. May the best hard takeoff win.
