China Is Taking AI Safety Seriously. So Must the U.S.

'China doesn't care about AI safety—so why should we?' This flawed logic pervades U.S. policy and tech circles, offering cover for a reckless race to the bottom as Washington rushes to outpace Beijing in AI development.
According to this rationale, regulating AI would risk falling behind in the so-called 'AI arms race.' And since China supposedly doesn't prioritize safety, racing ahead—even recklessly—is the safer long-term bet. This narrative is not just wrong; it's dangerous.
Ironically, Chinese leaders may have a lesson for the U.S.'s AI boosters: true speed requires control. As China's top tech official, Ding Xuexiang, put it bluntly at Davos in January 2025: 'If the braking system isn't under control, you can't step on the accelerator with confidence.' For Chinese leaders, safety isn't a constraint; it's a prerequisite.
AI safety has become a political priority in China. In April, President Xi Jinping chaired a rare Politburo study session on AI warning of 'unprecedented' risks. China's National Emergency Response Plan now lists AI safety alongside pandemics and cyberattacks. Regulators require pre-deployment safety assessments for generative AI and recently removed over 3,500 non-compliant AI products from the market. In just the first half of this year, China has issued more national AI standards than in the previous three years combined. Meanwhile, the volume of technical papers focused on frontier AI safety has more than doubled over the past year in China.
Yet the last time U.S. and Chinese leaders met to discuss AI's risks was in May 2024. That September, officials from both nations hinted at a second round of conversations 'at an appropriate time,' but no meeting took place under the Biden Administration, and it is even less certain that the Trump Administration will pick up the baton. This is a missed opportunity.
China is open to collaboration. In May 2025, it launched a bilateral AI dialogue with the United Kingdom. Esteemed Chinese scientists have contributed to major international efforts, such as the International AI Safety Report backed by 33 countries and intergovernmental organizations (including the U.S. and China) and The Singapore Consensus on Global AI Safety Research Priorities.
A necessary first step is to revive the dormant U.S.–China dialogue on AI risks. Without a functioning government-to-government channel, prospects for coordination remain slim. China indicated it was open to continuing the conversation at the end of the Biden Administration. The dialogue already yielded a modest but symbolically important agreement: both sides affirmed that human decision-making must remain in control of nuclear weapons. This channel has potential for further progress.
Going forward, discussions should focus on shared, high-stakes threats. Consider OpenAI's recent classification of its latest ChatGPT Agent as having crossed the 'High Capability' threshold in the biological domain under the company's own Preparedness Framework. This means the agent could, at least in principle, provide users with meaningful guidance that might facilitate the creation of dangerous biological threats. Both Washington and Beijing have a vital interest in preventing non-state actors from weaponizing such tools. An AI-assisted biological attack would not respect national borders. In addition, leading experts and Turing Award winners from the West and China share concerns that advanced general-purpose AI systems may come to operate outside of human control, posing catastrophic and existential risks.
Both governments have already acknowledged some of these risks. President Trump's AI Action Plan warns that AI may 'pose novel national security risks in the near future,' specifically in cybersecurity and in chemical, biological, radiological, and nuclear (CBRN) domains. Similarly, in September last year, China's primary AI security standards body highlighted the need for AI safety standards addressing cybersecurity, CBRN, and loss of control risks.
From there, the two sides could take practical steps to build technical trust between leading standards organizations—such as China's National Information Security Standardization Technical Committee (TC260) and America's National Institute of Standards and Technology (NIST).
In addition, industry bodies such as the AI Industry Alliance of China (AIIA) and the Frontier Model Forum in the U.S. could share best practices on risk management frameworks. AIIA has formulated 'Safety Commitments' that most leading Chinese developers have signed. A new Chinese risk management framework, focused squarely on frontier risks including cyber misuse, biological misuse, large-scale persuasion and manipulation, and loss-of-control scenarios, was published during the World AI Conference (WAIC) and could help the two countries align.
As trust deepens, governments and leading labs could begin sharing safety evaluation methods and results for the most advanced models. The Global AI Governance Action Plan, unveiled at WAIC, explicitly calls for the creation of 'mutually recognized safety evaluation platforms.' As an Anthropic co-founder noted, a recent Chinese AI safety evaluation report reached findings similar to those in the West: frontier AI systems pose non-trivial CBRN risks and are beginning to show early warning signs of autonomous self-replication and deception. A shared understanding of model vulnerabilities—and of how those vulnerabilities are being tested—would lay the groundwork for broader safety cooperation.
Finally, the two sides could establish incident-reporting channels and emergency response protocols. In the event of an AI-related accident or misuse, rapid and transparent communication will be essential. A modern equivalent of the 'hotlines' between top officials in both countries could ensure real-time alerts when models breach safety thresholds or behave unexpectedly. In April, President Xi Jinping explicitly stressed the need for 'monitoring, early risk warning and emergency response' in AI. After any dangerous incident, there should be a pre-agreed plan for how to respond.
Engagement won't be easy—political and technical hurdles are inevitable. But AI risks are global—and so must be the governance response. Rather than using China as a justification for domestic inaction on AI regulation, American policymakers and industry leaders should engage directly. AI risks won't wait.

Related Articles

I was struggling with GPT-5's new Thinking mode — these 6 tweaks boosted my results

Tom's Guide

GPT-5 has brought a bunch of changes with it. Depending on whom you ask, these upgrades are either game-changing or a complete flop; there has certainly been a GPT-5 backlash. But amid the raging debate between ChatGPT fans, one feature stands out: deep research is a game changer. According to OpenAI, GPT-5's thinking mode delivers a major boost to research. Not only is it smarter, it's also more efficient, spending less time researching for responses that are just as good, if not better. These days you can be very specific with what you ask a chatbot, and you can hand it a huge number of tasks at once. If you're new to ChatGPT, or trying to wrap your head around how best to use GPT-5, here are some tips for getting started with the model's thinking mode.

There are two versions of GPT-5 thinking; which one you choose depends on how much information you need and how long you're willing to wait. GPT-5 thinking is useful when you want as much information as possible and need the model to be as accurate as it can be. It takes time to search the internet, look through sources, and, where needed, use other tools to support its response.

So why not use this mode every time? Thinking deeply about an answer is time-consuming. I've had the model take 10 to 15 minutes or more to work through a prompt, and if you're asking a fairly simple question, all of that effort isn't needed. This is where GPT-5 thinking mini comes in. OpenAI describes it as a model that 'thinks quickly': it still searches sources and contemplates its response, but on a tighter deadline. It might not be as detailed, but it will be faster.

One of the big updates that arrived with GPT-5 is the new Auto mode. When this is used, ChatGPT decides on its own which model is best suited to your prompt.
Auto mode can be useful day-to-day when you're just asking ChatGPT random questions, but it doesn't always make the switch accurately. If you know you want to do some deep research, make sure to choose one of the thinking modes yourself.

Your initial prompt is the best time to lay out all of your parameters. Obviously you need to make the request itself, but this is also where you can get a better response by setting out some ground rules. State the end result you are hoping to receive and give ChatGPT a clear goal. If you simply ask for a report on the state of AI, it will make its own decisions about what that includes; explaining what the report is for and what you want it to cover will drastically improve the result. You can also ask it to provide a step-by-step plan before proceeding, which lets you see what it is about to do and make changes if you're unhappy with its planned route.

Sometimes it helps to include prompts that may seem counterintuitive. For example, ask ChatGPT to include a short 250-word argument attempting to disprove a research project. And when using thinking mode to learn a new subject or digest a lot of information, I've found it useful to ask for a short creative-writing explanation of the topic. Before prompting, think about what you need to know and the best way to understand it. Equally, don't be afraid to take advantage of the model's coding and image-generation abilities, which can provide visual explanations for your prompt.

Yes, you're using a mode called thinking, but sometimes ChatGPT doesn't think enough. It can be useful to give it small nudges in your initial prompt to help guide its actions.
For example, say 'Think hard about this' or 'Once you have come to a conclusion, reflect on your answer before responding.' These might feel like weird things to say, but they signal to ChatGPT that you're prioritizing a detailed, accurate response over a quick one.

It's also a good idea to split requests that might cause complications into stages. For example, if you want a research report and a coded website that displays the information in the report, ask for the report first, and then ask ChatGPT to code a website displaying everything it has researched.

Finally, if you take the time with your first prompt, the result should normally be pretty close. The last step is to ask follow-up questions or correct the model if it has misunderstood something you asked. Don't be afraid to follow up with ChatGPT: if the first response isn't what you wanted, keep asking for changes until you're satisfied.
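The tips above — state a goal, set ground rules, add a 'think hard' nudge, and split complicated requests into stages — can be sketched in code. This is a minimal illustration, not an official OpenAI pattern: the helper function, its parameters, and the "gpt-5" model string are assumptions for the example; only the commented-out API call at the end uses the real OpenAI Python client.

```python
def build_staged_prompts(goal: str, rules: list[str], stages: list[str]) -> list[list[dict]]:
    """Compose one message list per stage. The first stage carries the goal,
    the ground rules, and a 'think hard' nudge; later stages are sent as
    follow-ups that build on the earlier output."""
    preamble = f"Goal: {goal}\nRules:\n" + "\n".join(f"- {r}" for r in rules)
    prompts = []
    for i, stage in enumerate(stages):
        if i == 0:
            # Front-load the parameters and the nudge in the initial prompt.
            content = (
                preamble
                + f"\nFirst task: {stage}"
                + "\nThink hard about this; outline your plan before proceeding."
            )
        else:
            content = stage
        prompts.append([{"role": "user", "content": content}])
    return prompts

# Example: report first, website second, rather than one giant request.
stages = build_staged_prompts(
    goal="A report on the state of AI for a non-technical audience",
    rules=["Cite every source", "Include a 250-word counter-argument"],
    stages=[
        "Write the research report.",
        "Now code a simple website that displays the report above.",
    ],
)

# Sending each stage would look roughly like this (requires an API key,
# and "gpt-5" is an assumed model name):
# from openai import OpenAI
# client = OpenAI()
# for messages in stages:
#     reply = client.chat.completions.create(model="gpt-5", messages=messages)
```

The point of the structure is that the first message carries all the constraints, so the later, simpler follow-ups inherit a well-specified context instead of restating it.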

Trump says no imminent plans to penalize China for buying Russian oil

CNBC

U.S. President Donald Trump said on Friday he did not immediately need to consider retaliatory tariffs on countries such as China for buying Russian oil but might have to "in two or three weeks." Trump has threatened sanctions on Moscow and secondary sanctions on countries that buy its oil if no moves are made to end the war in Ukraine. China and India are the top two buyers of Russian oil. The president last week imposed an additional 25% tariff on Indian goods, citing its continued imports of Russian oil. However, Trump has not taken similar action against China. He was asked by Fox News' Sean Hannity if he was now considering such action against Beijing after he and Russian President Vladimir Putin failed to produce an agreement to resolve or pause Moscow's war in Ukraine. "Well, because of what happened today, I think I don't have to think about that," Trump said after his summit with Putin in Alaska. "Now, I may have to think about it in two weeks or three weeks or something, but we don't have to think about that right now. I think, you know, the meeting went very well." China's slowing economy would suffer if Trump follows through on a promise to ramp up Russia-related sanctions and tariffs. President Xi Jinping and Trump are working on a trade deal that could lower tensions, and import taxes, between the world's two biggest economies. But China could be the biggest remaining target, outside of Russia, if Trump ramps up punitive measures.

OpenAI employees to sell $6B of shares to SoftBank, others, Bloomberg says

Business Insider


Current and former staff of Microsoft-backed (MSFT) OpenAI intend to sell roughly $6B worth of stock to an investor group comprising Thrive Capital, Dragoneer Investment, and SoftBank (SFTBY), in a deal that values OpenAI at $500B, Bloomberg's Kate Clark reports, citing people familiar with the matter. The discussions are early, and the size of the sale could change, the report notes. Published first on TheFly.
