Latest news with #GlobalAIGovernance


WIRED
31-07-2025
- Business
- WIRED
Inside the Summit Where China Pitched Its AI Agenda to the World
Jul 31, 2025 11:04 AM

Behind closed doors, Chinese researchers are laying the groundwork for a new global AI agenda—without input from the US.

Three days after the Trump administration published its much-anticipated AI action plan, the Chinese government put out its own AI policy blueprint. Was the timing a coincidence? I doubt it.

China's 'Global AI Governance Action Plan' was released on July 26, the first day of the World Artificial Intelligence Conference (WAIC), the largest annual AI event in China. Geoffrey Hinton and Eric Schmidt were among the many Western tech industry figures who attended the festivities in Shanghai. Our WIRED colleague Will Knight was also on the scene.

The vibe at WAIC was the polar opposite of Trump's America-first, regulation-light vision for AI, Will tells me. In his opening speech, Chinese Premier Li Qiang made a sobering case for the importance of global cooperation on AI. He was followed by a series of prominent Chinese AI researchers, who gave technical talks highlighting urgent questions the Trump administration appears to be largely brushing off.

Zhou Bowen, leader of the Shanghai AI Lab, one of China's top AI research institutions, touted his team's work on AI safety at WAIC. He also suggested the government could play a role in monitoring commercial AI models for vulnerabilities.

In an interview with WIRED, Yi Zeng, a professor at the Chinese Academy of Sciences and one of the country's leading voices on AI, said that he hopes AI safety organizations from around the world find ways to collaborate. 'It would be best if the UK, US, China, Singapore, and other institutes come together,' he said.

The conference also included closed-door meetings about AI safety policy issues. Speaking after he attended one such confab, Paul Triolo, a partner at the advisory firm DGA-Albright Stonebridge Group, told WIRED that the discussions had been productive, despite the noticeable absence of American leadership.
With the US out of the picture, 'a coalition of major AI safety players, co-led by China, Singapore, the UK, and the EU, will now drive efforts to construct guardrails around frontier AI model development,' Triolo told WIRED. He added that it wasn't just the US government that was missing: Of all the major US AI labs, only Elon Musk's xAI sent employees to attend the WAIC forum.

Many Western visitors were surprised to learn how much of the conversation about AI in China revolves around safety regulations. 'You could literally attend AI safety events nonstop in the last seven days. And that was not the case with some of the other global AI summits,' Brian Tse, founder of the Beijing-based AI safety research institute Concordia AI, told me. Earlier this week, Concordia AI hosted a day-long safety forum in Shanghai with famous AI researchers like Stuart Russell and Yoshua Bengio.

Switching Positions

Comparing China's AI blueprint with Trump's action plan, it appears the two countries have switched positions. When Chinese companies first began developing advanced AI models, many observers thought they would be held back by censorship requirements imposed by the government. Now, US leaders say they want to ensure homegrown AI models 'pursue objective truth,' an endeavor that, as my colleague Steven Levy wrote in last week's Backchannel newsletter, is 'a blatant exercise in top-down ideological bias.'

China's AI action plan, meanwhile, reads like a globalist manifesto: It recommends that the United Nations help lead international AI efforts and suggests governments have an important role to play in regulating the technology.

Although their governments are very different, when it comes to AI safety, people in China and the US are worried about many of the same things: model hallucinations, discrimination, existential risks, cybersecurity vulnerabilities, etc.
Because the US and China are developing frontier AI models 'trained on the same architecture and using the same methods of scaling laws, the types of societal impact and the risks they pose are very, very similar,' says Tse. That also means academic research on AI safety is converging in the two countries, including in areas like scalable oversight (how humans can monitor AI models with other AI models) and the development of interoperable safety testing standards.

But Chinese and American leaders have demonstrated they have very different attitudes toward these issues. On one hand, the Trump administration recently tried and failed to put a 10-year moratorium on passing new state-level AI regulations. On the other hand, Chinese officials, including even Xi Jinping himself, are increasingly speaking out about the importance of putting guardrails on AI. Beijing has also been busy drafting domestic standards and rules for the technology, some of which are already in effect.

As Trump goes rogue with unorthodox and inconsistent policies, the Chinese government increasingly looks like the adult in the room. With its new AI action plan, Beijing is trying to seize the moment and send the world a message: If you want leadership on this world-changing innovation, look here.

Charm Offensive

I don't know how effective China's charm offensive will be in the end, but the global retreat of the US does feel like a once-in-a-century opportunity for Beijing to spread its influence, especially at a moment when every country is looking for role models to help them make sense of AI risks and the best ways to manage them.

But there's one thing I'm not sure about: How eager will China's domestic AI industry be to embrace this heightened focus on safety? While the Chinese government and academic circles have significantly ramped up their AI safety efforts, industry has so far seemed less enthusiastic—just like in the West.
Chinese AI labs disclose less information about their AI safety efforts than their Western counterparts do, according to a recent report published by Concordia AI. Of the 13 frontier AI developers in China the report analyzed, only three published details about safety assessments in their research.

Will told me that several tech entrepreneurs he spoke to at WAIC said they were worried about AI risks such as hallucination, model bias, and criminal misuse. But when it came to AGI, many seemed optimistic that the technology will have positive impacts on their lives, and they were less concerned about things like job loss or existential risks. Privately, Will says, some entrepreneurs admitted that addressing existential risks isn't as important to them as figuring out how to scale, make money, and beat the competition.

But the clear signal from the Chinese government is that companies should be encouraged to tackle AI safety risks, and I wouldn't be surprised if many startups in the country change their tune. Triolo, of DGA-Albright Stonebridge Group, said he expected Chinese frontier research labs to begin publishing more cutting-edge safety work.

Some WAIC attendees see China's focus on open source AI as a key part of the picture. 'As Chinese AI companies increasingly open-source powerful AIs, their American counterparts are pressured to do the same,' Bo Peng, a researcher who created the open source large language model RWKV, told WIRED. Peng envisions a future where different nations—including ones that do not always agree—work together on AI. 'A competitive landscape of multiple powerful open-source AIs is in the best interest of AI safety and humanity's future,' he explained. 'Because different AIs naturally embody different values and will keep each other in check.'

This is an edition of Zeyi Yang and Louise Matsakis' Made in China newsletter. Read previous newsletters here.
Yahoo
03-06-2025
- Business
- Yahoo
Obama's AI Job Loss Warnings Aren't Accidental, Says David Sacks: They're Fueling A Global Power Grab And 'The Most Orwellian Future Imaginable'
President Donald Trump's artificial intelligence advisor, David Sacks, criticized former President Barack Obama's recent warnings about AI-driven job displacement, characterizing them as part of a coordinated 'influence operation' designed to advance 'Global AI Governance' initiatives.

What Happened: In a series of posts on X, Sacks warned Republicans against accepting Obama's 'hyperbolic and unproven claims about AI job loss,' describing them as ammunition for what he termed a 'massive power grab by the bureaucratic state and globalist institutions.'

The crypto czar specifically targeted 'Effective Altruist' billionaires with histories of funding left-wing causes and opposing Trump. Sacks responded to Andreessen Horowitz general partner Martin Casado, who praised coverage of Open Philanthropy's alleged astroturfing campaign to regulate AI compute resources. 'There is much much more going on that is either unknown or chronically underdiscussed,' Casado noted, highlighting what he characterized as coordinated efforts to restrict AI development.

Sacks emphasized the fundamental ideological divide, stating these actors 'fundamentally believe in empowering government to the maximum.' He warned that 'the single greatest dystopian risk associated with AI is the risk that government uses it to control all of us,' potentially creating an 'Orwellian future where AI is controlled by the government.'

Why It Matters: This pivot follows what industry observers call the 'DeepSeek moment,' when China's breakthrough AI model demonstrated significant capabilities, challenging Western assumptions about Chinese AI development. The controversy highlights tensions between rapid AI advancement and governance frameworks.
Hedge fund manager Paul Tudor Jones recently warned that leading AI modelers believe there's a 10% chance AI could 'kill 50% of humanity' within 20 years, yet security spending remains minimal compared to $250 billion in development investments by major tech companies.

Sacks concluded his analysis by warning that 'WokeAI + Global AI Governance = the most Orwellian future imaginable,' positioning this combination as the ultimate goal of Effective Altruist organizations seeking expanded regulatory control over AI development and deployment.