
Watch the Doomsday Clock's latest update on how close the planet is to catastrophe
The 'Doomsday Clock' has been set to 89 seconds to midnight, the closest it has ever been to the midnight hour that symbolises global catastrophe.
Unveiled Tuesday (28 January) by the Bulletin of the Atomic Scientists, the clock highlights threats like nuclear disaster, climate change, disinformation, and artificial intelligence.
"We set the clock closer to midnight because we do not see sufficient progress, positive progress on the global challenges we face, including nuclear risk, climate change, biological threats, and advances in disruptive technologies," said Daniel Holz, Chair of the Bulletin's Science and Security Board.
Related Articles


Daily Mirror, 07-05-2025
India-Pakistan war 'would spark Armageddon with 125m deaths and global starvation'
Hundreds of millions could die 'immediately' and billions more would be hit by knock-on effects on Earth's atmosphere if nuclear war broke out between India and Pakistan, scientists said.

More than 100 million people could die if India and Pakistan began a devastating nuclear war, experts have warned. A study published in the Bulletin of the Atomic Scientists found tens of millions of people would perish "immediately" should tensions between the two countries result in nuclear weapons being used, while huge plumes of dust released into the Earth's atmosphere could trigger famines affecting "billions" around the world.

It comes after India launched a barrage of ballistic missiles and drones into Pakistan early on Wednesday, killing at least 26 people. Pakistan described the strikes as an "act of war" and claimed it shot down several Indian fighter jets in retaliation. Tensions have soared between the nuclear-armed neighbours over a deadly attack on tourists in the Indian-controlled portion of Kashmir, which India says was carried out by terror groups based in Pakistan.

Pakistan and India are estimated to have just under 400 nuclear weapons between them, and scientists say the impacts of their use would stretch far beyond South Asia. In 2019, researchers from the Department of Environmental Sciences at Rutgers University in the US found that, after killing around 125 million people in the initial nuclear blasts, the resulting fires could pump around 16 million to 36 million tons of soot into the upper atmosphere, which would spread around the world within only a few weeks.

This thick soot would reduce the amount of sunlight reaching Earth's surface by around 20% to 35%, cooling the planet by two to five degrees. The loss of sunlight and reduced precipitation would have wider knock-on effects on agriculture, potentially causing mass famines affecting billions of people, while nuclear fallout would spread radioactive poisoning across a wide area. Given the amount of smoke in Earth's atmosphere, it could take as long as a decade for conditions to return to normal, the scientists warned.

Alan Robock, a Distinguished Professor in the Department of Environmental Sciences at Rutgers University who co-authored the study, said at the time: 'Nine countries have nuclear weapons, but Pakistan and India are the only ones rapidly increasing their arsenals. Because of the continuing unrest between these two nuclear-armed countries, particularly over Kashmir, it is important to understand the consequences of a nuclear war.' He added: 'Such a war would threaten not only the locations where bombs might be targeted but the entire world.'


The Independent, 29-01-2025
New report outlines risks of artificial intelligence in early stages
Advanced artificial intelligence systems have the potential to create extreme new risks, such as fueling widespread job losses, enabling terrorism or running amok, experts said in a first-of-its-kind international report Wednesday cataloging the range of dangers posed by the technology.

The International Scientific Report on the Safety of Advanced AI is being released ahead of a major AI summit in Paris next month. The paper is backed by 30 countries including the U.S. and China, marking rare cooperation between the two countries as they battle over AI supremacy, highlighted by Chinese startup DeepSeek stunning the world this week with its budget chatbot in spite of U.S. export controls on advanced chips to the country.

The report by a group of independent experts is a 'synthesis' of existing research intended to help guide officials working on drawing up guardrails for the rapidly advancing technology, said Yoshua Bengio, a prominent AI scientist who led the study. 'The stakes are high,' the report says, noting that while a few years ago the best AI systems could barely spit out a coherent paragraph, now they can write computer programs, generate realistic images and hold extended conversations.

While some AI harms are already widely known, such as deepfakes, scams and biased results, the report said that 'as general-purpose AI becomes more capable, evidence of additional risks is gradually emerging' and risk management techniques are only in their early stages. It comes amid warnings this week about artificial intelligence from the Vatican and the group behind the Doomsday Clock.

The report focuses on general-purpose AI, typified by chatbots such as OpenAI's ChatGPT, used to carry out many different kinds of tasks. The risks fall into three categories: malicious use, malfunctions and widespread 'systemic' risks.

Bengio, who with two other AI pioneers won computer science's top prize in 2019, said the 96 experts who came together on the report don't all agree on what to expect from AI in the future. Among the biggest disagreements within the AI research community is the timing of when the fast-developing technology will surpass human capabilities across a variety of tasks and what that will mean. 'They disagree also about the scenarios,' Bengio said. 'Of course, nobody has a crystal ball. Some scenarios are very beneficial. Some are terrifying. I think it's really important for policymakers and the public to take stock of that uncertainty.'

Researchers delved into the details surrounding possible dangers. AI makes it easier, for example, to learn how to create biological or chemical weapons because AI models can provide step-by-step plans, but it's 'unclear how well they capture the practical challenges' of weaponizing and delivering the agents, the report said.

General-purpose AI is also likely to transform a range of jobs and 'displace workers,' the report says, noting that some researchers believe it could create more jobs than it takes away, while others think it will drive down wages or employment rates, though there's plenty of uncertainty over how it will play out.

AI systems could also run out of control, either because they actively undermine human oversight or because humans pay less attention, the report said. However, a raft of factors make it hard to manage the risks, including AI developers knowing little about how their models work, the authors said.
The paper was commissioned at an inaugural global summit on AI safety hosted by Britain in November 2023, where nations agreed to work together to contain potentially 'catastrophic risks.' At a follow-up meeting hosted by South Korea last year, AI companies pledged to develop AI safely, while world leaders backed setting up a network of public AI safety institutes.

The report, also backed by the United Nations and the European Union, is meant to weather changes in governments, such as the recent presidential transition in the U.S., leaving it up to each country to choose how it responds to AI risks. President Donald Trump rescinded former President Joe Biden's AI safety policies on his first day in office, and has since directed his new administration to craft its own approach. But Trump hasn't made any move to disband the AI Safety Institute that Biden formed last year, part of a growing international network of such centers.

World leaders, tech bosses and civil society are expected to convene again at the Paris AI Action Summit on Feb. 10-11. French officials have said countries will sign a 'common declaration' on AI development and agree to a pledge on sustainable development of the technology.

Bengio said the report's aim was not to 'propose a particular way to evaluate systems or anything.' The authors stayed away from prioritizing particular risks or making specific policy recommendations. Instead, they laid out what the scientific literature on AI says 'in a way that's digestible by policymakers.' 'We need to better understand the systems we're building and the risks that come with them so that we can take better decisions in the future,' he said.


NBC News, 29-01-2025
AI, wars and climate change mean it's later than ever on the 'Doomsday Clock'
Earth is moving closer to destruction, a science-oriented advocacy group said Tuesday as it advanced its famous 'Doomsday Clock' to 89 seconds till midnight, the closest it has ever been.

The Bulletin of the Atomic Scientists made the annual announcement, which rates how close humanity is to ending, citing threats that include climate change, proliferation of nuclear weapons, instability in the Middle East, the threat of pandemics and the incorporation of artificial intelligence into military operations.

The clock had stood at 90 seconds to midnight for the past two years, and 'when you are at this precipice, the one thing you don't want to do is take a step forward,' said Daniel Holz, chair of the group's science and security board.

The group said it is concerned about cooperation between countries such as North Korea, Russia and China in developing nuclear programs. Russian President Vladimir Putin has also talked about using nuclear weapons in his war against Ukraine. 'A lot of the rhetoric is very disturbing,' Holz said. 'There is this growing sense that ... some nation might end up using nuclear weapons, and that's terrifying.'

Since 1947, the advocacy group has used the clock to symbolize the potential, and even the likelihood, of humanity doing something to end itself. After the end of the Cold War, the clock was set as far back as 17 minutes to midnight. In the past few years, to reflect rapid global changes, the group has switched from counting down the minutes until midnight to counting down the seconds.