Latest news with #InternationalAISafetyReport


Time Magazine
3 days ago
- Business
China Is Taking AI Safety Seriously. So Must the U.S.
'China doesn't care about AI safety—so why should we?' This flawed logic pervades U.S. policy and tech circles, offering cover for a reckless race to the bottom as Washington rushes to outpace Beijing in AI development. According to this rationale, regulating AI would risk falling behind in the so-called 'AI arms race.' And since China supposedly doesn't prioritize safety, racing ahead, even recklessly, is the safer long-term bet. This narrative is not just wrong; it's dangerous.
Ironically, Chinese leaders may have a lesson for the U.S.'s AI boosters: true speed requires control. As China's top tech official, Ding Xuexiang, put it bluntly at Davos in January 2025: 'If the braking system isn't under control, you can't step on the accelerator with confidence.' For Chinese leaders, safety isn't a constraint; it's a prerequisite.
AI safety has become a political priority in China. In April, President Xi Jinping chaired a rare Politburo study session on AI, warning of 'unprecedented' risks. China's National Emergency Response Plan now lists AI safety alongside pandemics and cyberattacks. Regulators require pre-deployment safety assessments for generative AI and recently removed over 3,500 non-compliant AI products from the market. In just the first half of this year, China has issued more national AI standards than in the previous three years combined. Meanwhile, the volume of Chinese technical papers focused on frontier AI safety has more than doubled over the past year.
Yet the last time U.S. and Chinese leaders met to discuss AI's risks was in May 2024. In September, officials from both nations hinted at a second round of conversations 'at an appropriate time.' But no meeting took place under the Biden Administration, and there is even greater uncertainty over whether the Trump Administration will pick up the baton. This is a missed opportunity.
China is open to collaboration. In May 2025, it launched a bilateral AI dialogue with the United Kingdom. Esteemed Chinese scientists have contributed to major international efforts, such as the International AI Safety Report, backed by 33 countries and intergovernmental organizations (including the U.S. and China), and The Singapore Consensus on Global AI Safety Research Priorities.
A necessary first step is to revive the dormant U.S.–China dialogue on AI risks. Without a functioning government-to-government channel, prospects for coordination remain slim. China indicated it was open to continuing the conversation at the end of the Biden Administration. The dialogue already yielded a modest but symbolically important agreement: both sides affirmed that human decision-making must remain in control of nuclear weapons. This channel has potential for further progress.
Going forward, discussions should focus on shared, high-stakes threats. Consider OpenAI's recent classification of its latest ChatGPT Agent as having crossed the 'High Capability' threshold in the biological domain under the company's own Preparedness Framework. This means the agent could, at least in principle, provide users with meaningful guidance that might facilitate the creation of dangerous biological threats. Both Washington and Beijing have a vital interest in preventing non-state actors from weaponizing such tools. An AI-assisted biological attack would not respect national borders.
In addition, leading experts and Turing Award winners from the West and China share concerns that advanced general-purpose AI systems may come to operate outside of human control, posing catastrophic and existential risks. Both governments have already acknowledged some of these risks. President Trump's AI Action Plan warns that AI may 'pose novel national security risks in the near future,' specifically in cybersecurity and in chemical, biological, radiological, and nuclear (CBRN) domains. Similarly, in September last year, China's primary AI security standards body highlighted the need for AI safety standards addressing cybersecurity, CBRN, and loss-of-control risks.
From there, the two sides could take practical steps to build technical trust between leading standards organizations, such as China's National Information Security Standardization Technical Committee (TC260) and America's National Institute of Standards and Technology (NIST). In addition, industry authorities, such as the AI Industry Alliance of China (AIIA) and the Frontier Model Forum in the U.S., could share best practices on risk management frameworks. AIIA has formulated 'Safety Commitments' which most leading Chinese developers have signed. A new Chinese risk management framework, focused fully on frontier risks including cyber misuse, biological misuse, large-scale persuasion and manipulation, and loss-of-control scenarios, was published during the World AI Conference (WAIC) and can help both countries align.
As trust deepens, governments and leading labs could begin sharing safety evaluation methods and results for the most advanced models. The Global AI Governance Action Plan, unveiled at WAIC, explicitly calls for the creation of 'mutually recognized safety evaluation platforms.' As an Anthropic co-founder has noted, a recent Chinese AI safety evaluation report reached findings similar to those in the West: frontier AI systems pose some non-trivial CBRN risks and are beginning to show early warning signs of autonomous self-replication and deception. A shared understanding of model vulnerabilities, and of how those vulnerabilities are being tested, would lay the groundwork for broader safety cooperation.
Finally, the two sides could establish incident-reporting channels and emergency response protocols. In the event of an AI-related accident or misuse, rapid and transparent communication will be essential. A modern equivalent to 'hotlines' between top AI officials in both countries could ensure real-time alerts when models breach safety thresholds or behave unexpectedly. In April, President Xi Jinping explicitly stressed the need for 'monitoring, early risk warning and emergency response' in AI. After any dangerous incident, there should be a pre-agreed plan for how to react.
Engagement won't be easy; political and technical hurdles are inevitable. But AI risks are global, and so must be the governance response. Rather than using China as a justification for domestic inaction on AI regulation, American policymakers and industry leaders should engage directly. AI risks won't wait.


Sky News
05-02-2025
- Business
'Godfather' of AI warns arms race risks amplifying dangers of 'superhuman' systems
An arms race for artificial intelligence (AI) supremacy, triggered by recent panic over Chinese chatbot DeepSeek, risks amplifying the existential dangers of superintelligence, according to one of the "godfathers" of AI. Canadian machine learning pioneer Yoshua Bengio, author of the first International AI Safety Report, to be presented at an international AI summit in Paris next week, warns that unchecked investment in computational power for AI without oversight is dangerous.
"The effort is going into who's going to win the race, rather than how do we make sure we are not going to build something that blows up in our face," Mr Bengio says. He warns that military and economic races "result in cutting corners on ethics, cutting corners on responsibility and on safety. It's unavoidable".
Mr Bengio worked on neural networks and machine learning, the software architecture that underpins modern AI models. He is in London, along with other AI pioneers, to receive the Queen Elizabeth Prize, UK engineering's most prestigious award, in recognition of AI and its potential. He's enthusiastic about AI's benefits for society, but he sees the pivot away from AI regulation by Donald Trump's White House, and the frantic competition among big tech companies for more powerful AI models, as a worrying shift.
'Superhuman systems becoming more powerful'
"We are building systems that are more and more powerful; becoming superhuman in some dimensions," he says. "As these systems become more powerful, they also become extraordinarily more valuable, economically speaking. So the magnitude of, 'wow, this is going to make me a lot of money' is motivating a lot of people. And of course, when you want to sell products, you don't want to talk about the risks."
But not all the "godfathers" of AI are so concerned. Take Yann LeCun, Meta's chief AI scientist, also in London to share in the QE Prize. "We have been deluded into thinking that large language models are intelligent, but really, they're not," he says. "We don't have machines that are nearly as smart as a house cat, in terms of understanding the physical world."
Within three to five years, Mr LeCun predicts, AI will have some aspects of human-level intelligence: robots, for example, that can perform tasks they've not been programmed or trained to do. But, he argues, rather than making the world less safe, the DeepSeek drama, in which a Chinese company developed an AI to rival the best of America's big tech with a tenth of the computing power, demonstrates that no one will dominate for long. "If the US decides to clam up when it comes to AI for geopolitical reasons, or, commercial reasons, then you'll have innovation someplace else in the world. DeepSeek showed that," he says.
The Royal Academy of Engineering prize is awarded each year to engineers whose discoveries have, or promise to have, the greatest impact on the world. Previous recipients include the pioneers of photovoltaic cells in solar panels, wind turbine technology, and the neodymium magnets found in hard drives and electric motors.
Science minister Lord Vallance, who chairs the QE Prize foundation, says he is alert to the potential risks of AI. Organisations such as the UK's new AI Safety Institute are designed to foresee and prevent the potential harms that "human-like" AI might bring. But he is less concerned about one nation or company having a monopoly on AI.
"I think what we've seen in the last few weeks is it's much more likely that we're going to have many companies in this space, and the idea of single-point dominance is rather unlikely," he says.

