
What the International AI Safety report says on jobs, climate, cyberwar and more
The document, commissioned after the 2023 global AI safety summit, covers numerous threats, from deepfakes to aiding cyberattacks and the use of biological weapons, as well as the technology's impact on jobs and the environment.
Here are some of the key points from the report chaired by Yoshua Bengio, a world-leading computer scientist.
In a section on 'labour market risks', the report warns that the impact on jobs will 'likely be profound', particularly if AI agents – tools that can carry out tasks without human intervention – become highly capable.
'General-purpose AI, especially if it continues to advance rapidly, has the potential to automate a very wide range of tasks, which could have a significant effect on the labour market. This means that many people could lose their current jobs,' said the report.
The report added that many economists believe job losses could be offset by the creation of new jobs or demand from sectors not touched by automation.
According to the International Monetary Fund, about 60% of jobs in advanced economies such as the US and UK are exposed to AI and half of these jobs may be negatively affected. The Tony Blair Institute has said AI could displace up to 3m private-sector jobs in the UK, though the ultimate rise in unemployment will be in the low hundreds of thousands because growth in the technology will create new roles in an AI-transformed economy.
'These disruptions could be particularly severe if autonomous AI agents become capable of completing longer sequences of tasks without human supervision,' the report says.
It adds that some experts have pointed to scenarios where work is 'largely' eliminated. In 2023 Elon Musk, the world's richest person, told the former UK prime minister, Rishi Sunak, that AI could ultimately replace all human jobs. However, the report said such views were controversial and there was 'considerable uncertainty' over how AI might impact labour markets.
The report described AI's impact on the environment as a 'moderate but rapidly growing contributor' as datacentres – the central nervous systems of AI models – consume electricity to train and operate the technology.
Datacentres and data transmission account for about 1% of energy-related greenhouse gas emissions, said the report, with AI constituting up to 28% of datacentre energy consumption.
It added that models are using more energy as they become more advanced and warned that a 'significant portion' of global model training relies on high-carbon energy sources such as coal or natural gas. Use of renewable energy by AI firms and improvements in efficiency have not kept pace with rising demand for energy, said the report, which also points to tech firms admitting that AI development is harming their ability to meet environmental targets.
The report also warns that water consumption by AI, used for cooling equipment in datacentres, could pose a 'substantial threat to the environment and the human right to water'. However, the report adds that there is a shortage of data about the environmental impact of AI.
An all-powerful AI system evading human control is the central concern of experts who fear the technology could extinguish humanity. The report acknowledges those fears but says opinion varies 'greatly'.
'Some consider it implausible, some consider it likely to occur, and some see it as a modest-likelihood risk that warrants attention due to its high severity,' it says.
Bengio told the Guardian that AI agents, which carry out tasks autonomously, are still being developed and so far are unable to carry out the long-term planning necessary for those systems to eradicate jobs wholesale – or evade safety guidelines. 'If an AI cannot plan over a long horizon, it's hardly going to be able to escape our control,' he said.
The report states that new models can create step-by-step guides to creating pathogens and toxins that surpass PhD-level expertise. However, it cautions that there is uncertainty over whether they can be used by novices.
There is evidence of advancement since an interim safety report last year, the experts say, with OpenAI producing a model that could 'meaningfully assist experts in the operational planning of reproducing known biological threats'.
A fast-growing threat from AI in terms of cyber-espionage is autonomous bots being able to find vulnerabilities in open-source software, the term for code that is free to download and adapt. However, relative shortcomings in AI agents mean the technology is not able to plan and carry out attacks autonomously.
The report lists an array of known examples of AI deepfakes being used maliciously, including tricking companies into handing over money and creating pornographic images of people. However, the report says there is not enough data to fully measure the number of deepfake incidents.
'Reluctance to report may be contributing to these challenges in understanding the full impact of AI-generated content intended to harm individuals,' the report says. 'For example, institutions often hesitate to disclose their struggles with AI-powered fraud. Similarly, individuals attacked with AI-generated compromising material about themselves may stay silent out of embarrassment and to avoid further harm.'
The report also warns that there are 'fundamental challenges' to tackling deepfake content, such as the ability to remove digital watermarks that flag AI-generated content.