
Latest news with #TheSingaporeConsensusonGlobalAISafetyResearchPriorities

China Is Taking AI Safety Seriously. So Must the U.S.

Time Magazine

4 days ago


'China doesn't care about AI safety—so why should we?' This flawed logic pervades U.S. policy and tech circles, offering cover for a reckless race to the bottom as Washington rushes to outpace Beijing in AI development. According to this rationale, regulating AI would risk falling behind in the so-called 'AI arms race.' And since China supposedly doesn't prioritize safety, racing ahead—even recklessly—is the safer long-term bet. This narrative is not just wrong; it's dangerous.

Ironically, Chinese leaders may have a lesson for the U.S.'s AI boosters: true speed requires control. As China's top tech official, Ding Xuexiang, put it bluntly at Davos in January 2025: 'If the braking system isn't under control, you can't step on the accelerator with confidence.' For Chinese leaders, safety isn't a constraint; it's a prerequisite.

AI safety has become a political priority in China. In April, President Xi Jinping chaired a rare Politburo study session on AI, warning of 'unprecedented' risks. China's National Emergency Response Plan now lists AI safety alongside pandemics and cyberattacks. Regulators require pre-deployment safety assessments for generative AI and recently removed over 3,500 non-compliant AI products from the market. In just the first half of this year, China has issued more national AI standards than in the previous three years combined. Meanwhile, the volume of Chinese technical papers focused on frontier AI safety has more than doubled over the past year.

Yet the last time U.S. and Chinese leaders met to discuss AI's risks was in May 2024. In September, officials from both nations hinted at a second round of conversations 'at an appropriate time.' But no meeting took place under the Biden Administration, and there is even greater uncertainty over whether the Trump Administration will pick up the baton. This is a missed opportunity.

Read More: The Politics, and Geopolitics, of Artificial Intelligence

China is open to collaboration. In May 2025, it launched a bilateral AI dialogue with the United Kingdom. Esteemed Chinese scientists have contributed to major international efforts, such as the International AI Safety Report backed by 33 countries and intergovernmental organizations (including the U.S. and China) and The Singapore Consensus on Global AI Safety Research Priorities.

A necessary first step is to revive the dormant U.S.–China dialogue on AI risks. Without a functioning government-to-government channel, prospects for coordination remain slim. China indicated it was open to continuing the conversation at the end of the Biden Administration. That dialogue already yielded a modest but symbolically important agreement: both sides affirmed that human decision-making must remain in control of nuclear weapons. This channel has potential for further progress.

Going forward, discussions should focus on shared, high-stakes threats. Consider OpenAI's recent classification of its latest ChatGPT Agent as having crossed the 'High Capability' threshold in the biological domain under the company's own Preparedness Framework. This means the agent could, at least in principle, provide users with meaningful guidance that might facilitate the creation of dangerous biological threats. Both Washington and Beijing have a vital interest in preventing non-state actors from weaponizing such tools. An AI-assisted biological attack would not respect national borders.
In addition, leading experts and Turing Award winners from the West and China share concerns that advanced general-purpose AI systems may come to operate outside of human control, posing catastrophic and existential risks. Both governments have already acknowledged some of these risks. President Trump's AI Action Plan warns that AI may 'pose novel national security risks in the near future,' specifically in cybersecurity and in chemical, biological, radiological, and nuclear (CBRN) domains. Similarly, in September last year, China's primary AI security standards body highlighted the need for AI safety standards addressing cybersecurity, CBRN, and loss-of-control risks.

From there, the two sides could take practical steps to build technical trust between leading standards organizations—such as China's National Information Security Standardization Technical Committee (TC260) and the U.S. National Institute of Standards and Technology (NIST). Plus, industry bodies, such as the AI Industry Alliance of China (AIIA) and the Frontier Model Forum in the U.S., could share best practices on risk management frameworks. AIIA has formulated 'Safety Commitments' that most leading Chinese developers have signed. A new Chinese risk management framework, focused squarely on frontier risks including cyber misuse, biological misuse, large-scale persuasion and manipulation, and loss-of-control scenarios, was published during the World AI Conference (WAIC) and can help both countries align.

Read More: The U.S. Can't Afford to Lose the Biotech Race with China

As trust deepens, governments and leading labs could begin sharing safety evaluation methods and results for the most advanced models. The Global AI Governance Action Plan, unveiled at WAIC, explicitly calls for the creation of 'mutually recognized safety evaluation platforms.' As an Anthropic co-founder has noted, a recent Chinese AI safety evaluation report reached findings similar to those in the West: frontier AI systems pose some non-trivial CBRN risks and are beginning to show early warning signs of autonomous self-replication and deception. A shared understanding of model vulnerabilities—and of how those vulnerabilities are being tested—would lay the groundwork for broader safety cooperation.

Finally, the two sides could establish incident-reporting channels and emergency response protocols. In the event of an AI-related accident or misuse, rapid and transparent communication will be essential. A modern equivalent of 'hotlines' between top AI officials in both countries could ensure real-time alerts when models breach safety thresholds or behave unexpectedly. In April, President Xi Jinping explicitly stressed the need for 'monitoring, early risk warning and emergency response' in AI. After any dangerous incident, there should be a pre-agreed plan for how to react.

Engagement won't be easy—political and technical hurdles are inevitable. But AI risks are global—and so must be the governance response. Rather than using China as a justification for domestic inaction on AI regulation, American policymakers and industry leaders should engage directly. AI risks won't wait.

Spain moves forward with plans to shorten the 40-hour working week

Euronews

10-05-2025


The last global gathering on artificial intelligence (AI), the Paris AI Action Summit in February, saw countries divided, notably after the US and UK refused to sign a joint declaration for AI that is "open, inclusive, transparent, ethical, safe, secure, and trustworthy". AI experts at the time criticised the declaration for not going far enough and being "devoid of any meaning", which was also the reason the two countries cited for not signing the pact, rather than opposition to AI safety itself.

The next global AI summit will be held in India next year, but rather than wait until then, Singapore's government held a conference called the International Scientific Exchange on AI Safety on April 26. "Paris [AI Summit] left a misimpression that people don't agree about AI safety," said Max Tegmark, MIT professor and contributor to the Singapore report. "The Singapore government was clever to say yes, there is an agreement," he told Euronews Next.

Representatives from leading AI companies, such as OpenAI, Meta, Google DeepMind, and Anthropic, as well as leaders from 11 countries, including the US, China, and the EU, attended. The result of the conference was published in a paper released on Thursday called 'The Singapore Consensus on Global AI Safety Research Priorities'. The document lists research proposals to ensure that AI does not become dangerous to humanity.

It identifies three aspects of promoting safe AI: assessing, developing trustworthy, and controlling AI systems. The systems in scope include large language models (LLMs), multimodal models that can work with multiple types of data, often including text, images, and video, and lastly, AI agents.

On assessment, the document argues for research into risk thresholds to determine when intervention is needed, techniques for studying current impacts and forecasting future implications, and methods for rigorous testing and evaluation of AI systems. Other key areas of research listed include improving the validity and precision of AI model assessments and finding methods for testing dangerous behaviours, including scenarios where AI operates outside human control.

The paper calls for defining the boundaries between acceptable and unacceptable behaviours. It also says that AI systems should be built with truthful and honest systems and datasets. And once built, these AI systems should be checked to ensure they meet agreed safety standards, such as tests against jailbreaking.

The final area the paper advocates for is the control and societal resilience of AI systems. This includes monitoring, kill switches, and non-agentic AI serving as guardrails for agentic systems. It also calls for human-centric oversight frameworks. As for societal resilience, the paper said that infrastructure against AI-enabled disruptions should be strengthened, and it argued that coordination mechanisms for incident responses should be developed.

The release of the report comes as the geopolitical race for AI intensifies and AI companies rush out their latest models to beat the competition. However, Xue Lan, Dean of Tsinghua University, who attended the conference, said: "In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future". Tegmark added that there is a consensus for AI safety between governments and tech firms, as it is in everyone's interest.
"OpenAI, Antropic, and all these companies sent people to the Singapore conference; they want to share their safety concerns, and they don't have to share their secret sauce," he said. "Rival governments also don't want nuclear blow-ups in opposing countries, it's not in their interest," he added. Tegmark hopes that before the next AI summit in India, governments will treat AI like any other powerful tech industry, such as biotech, whereby there are safety standards in each country and new drugs are required to pass certain trials. "I'm feeling much more optimistic about the next summit now than after Paris," Tegmark said. Spain may soon move to a shorter week with workers enjoying 2.5 hours more rest after the government on Tuesday approved a bill that would reduce official working hours from 40 hours to 37.5 hours. If enacted, the bill, which will now go through the Spanish parliament, would benefit 12.5 million full-time and part-time private sector workers and is expected to improve productivity and reduce absenteeism, according to the country's Ministry of Labour. "Today, we are modernising the world of labour and helping people to be a little happier," said Labour Minister Yolanda Díaz, who heads the party Sumar that forms part of the current left-wing coalition government. The measure, which already applies to civil servants and some other sectors, would mainly affect retail, manufacturing, hospitality, and construction, Díaz added. Prime Minister Pedro Sánchez's government does not have a clear majority in parliament, where the bill must be approved for it to become law. The main trade unions have expressed support for the proposal, unlike business associations. Sumar, the hard-left minority partner of Sánchez's Socialist Party, proposed the bill. The Catalan nationalist party Junts, an occasional ally of Sánchez's coalition, expressed concern over what it said would be negative consequences for small companies and the self-employed under a shorter working week. The coalition will have to balance the demands of Junts and other smaller parties to get the bill passed. Spain has had a 40-hour workweek since 1983, when it was reduced from 48 hours. In the wake of the COVID-19 pandemic, there have been moves to change working habits with various pilot schemes launched in Spain to potentially introduce a four-day workweek, including a smaller trial in Valencia. The results of the month-long programme suggested that workers had benefited from longer weekends, developing healthier habits such as taking up sports, as well as reducing their stress levels. The European Commission has taken Czechia, Cyprus, Poland, Portugal and Spain to the EU's highest court for failing to correctly apply the Digital Services Act (DSA), it said on Wednesday. The DSA – which aims to protect users against illegal content and products online – entered fully into force in February last year: by then member states had to appoint a national authority tasked with overseeing the rules in their respective countries. Those watchdogs must cooperate with the Commission, which by itself oversees the largest batch of platforms that have more than 45 million users each month. The countries were also required to give their regulators enough means to carry out their tasks as well as to draft rules on penalties for infringements of the DSA. Poland failed to designate and empower its authority to carry out its tasks under the DSA, the Commission's statement said. 
Czechia, Cyprus, Spain and Portugal – which each designated a watchdog – did not give them the necessary powers to carry out their tasks under the regulation, the Commission found. The EU executive began its infringement procedure by sending letters of formal notice to the five countries in early 2024. None of the countries took the necessary measures in the meantime.

In a separate case, the Commission said it has stepped up its procedure against Bulgaria for also failing to empower a national regulator under the DSA and for failing to lay down rules on penalties. If the country does not address the shortcomings within two months, the Commission could also take Bulgaria to court.

Since late 2023, when the DSA entered into force for the largest group of online platforms, the Commission has opened several investigations into potential breaches. None of these probes, including those into X, TikTok and Meta, has yet been wrapped up.
