
Nuclear Experts Say Mixing AI and Nuclear Weapons Is Inevitable
In the middle of July, Nobel laureates gathered at the University of Chicago to listen to nuclear war experts talk about the end of the world. In closed sessions over two days, scientists, former government officials, and retired military personnel briefed the laureates on the most devastating weapons ever created. The goal was to educate some of the world's most respected people about the horrors of nuclear war and, at the end of it, have the laureates make policy recommendations to world leaders about how to avoid it.
AI was on everyone's mind. 'We're entering a new world of artificial intelligence and emerging technologies influencing our daily life, but also influencing the nuclear world we live in,' Scott Sagan, a Stanford professor known for his research into nuclear disarmament, said during a press conference at the end of the talks.
It's a statement that takes as given the inevitability of governments mixing AI and nuclear weapons—something everyone I spoke with in Chicago believed in.
'It's like electricity,' says Bob Latiff, a retired US Air Force major general and a member of the Bulletin of the Atomic Scientists' Science and Security Board. 'It's going to find its way into everything.' Latiff is one of the people who helps set the Doomsday Clock every year.
'The conversation about AI and nukes is hampered by a couple of major problems. The first is that nobody really knows what AI is,' says Jon Wolfsthal, a nonproliferation expert who's the director of global risk at the Federation of American Scientists and was formerly a special assistant to Barack Obama.
'What does it mean to give AI control of a nuclear weapon? What does it mean to give a [computer chip] control of a nuclear weapon?' asks Herb Lin, a Stanford professor and Doomsday Clock alum. 'Part of the problem is that large language models have taken over the debate.'
First, the good news. No one thinks that ChatGPT or Grok will get nuclear codes anytime soon. Wolfsthal tells me that there are a lot of 'theological' differences between nuclear experts, but that they're united on that front. 'In this realm, almost everybody says we want effective human control over nuclear weapon decisionmaking,' he says.
Still, Wolfsthal has heard whispers of other concerning uses of LLMs in the heart of American power. 'A number of people have said, 'Well, look, all I want to do is have an interactive computer available for the president so he can figure out what Putin or Xi will do and I can produce that dataset very reliably. I can get everything that Xi or Putin has ever said and written about anything and have a statistically high probability to reflect what Putin has said,'' he says.
'I was like, 'That's great. How do you know Putin believes what he's said or written?' It's not that the probability is wrong, it's just based on an assumption that can't be tested,' Wolfsthal says. 'Quite frankly, I think very few of the people who are looking at this have ever been in a room with a president. I don't claim to be close to any president, but I have been in the room with a bunch of them when they talk about these things, and they don't trust anybody with this stuff.'
Last year, Air Force General Anthony J. Cotton, the military leader in charge of America's nukes, gave a long speech at a conference about the importance of adopting AI. He said the nuclear forces were 'developing artificial intelligence or AI-enabled, human led, decision support tools to ensure our leaders are able to respond to complex, time-sensitive scenarios.'
What keeps Wolfsthal up at night is not the idea that a rogue AI will start a nuclear war. 'What I worry about is that somebody will say we need to automate this system and parts of it, and that will create vulnerabilities that an adversary can exploit, or that it will produce data or recommendations that people aren't equipped to understand, and that will lead to bad decisions,' he says.
Launching a nuclear weapon is not as simple as one leader in China, Russia, or the US pushing a button. Nuclear command and control is an intricate web of early warning radar, satellites, and other computer systems monitored by human beings. If the president orders the launch of an intercontinental ballistic missile, two human beings must turn keys in concert with each other in an individual silo to launch the nuke. The launch of an American nuclear weapon is the end result of a hundred little decisions, all of them made by humans.
What will happen when AI takes over some of that process? What happens when an AI is watching the early warning radar and not a human? 'How do you verify that we're under nuclear attack? Can you rely on anything other than visual confirmation of the detonation?' Wolfsthal says. US nuclear policy requires what's called 'dual phenomenology' to confirm that a nuclear strike has been launched: An attack must be confirmed by both satellite and radar systems to be considered genuine. 'Can one of those phenomena be artificial intelligence? I would argue, at this stage, no.'
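The logic of dual phenomenology is easier to see as a rule than as a slogan. Below is a minimal, purely illustrative sketch of a two-source confirmation check in that spirit; the sensor names, data structures, and decision logic are assumptions made for the example, not a description of any real early warning system.

```python
from dataclasses import dataclass

# Hypothetical sketch of a two-source ("dual phenomenology") confirmation rule.
# Sensor names and fields are illustrative only, not drawn from a real system.

@dataclass
class SensorReport:
    source: str          # e.g. "satellite" or "radar"
    detects_launch: bool

def attack_confirmed(reports: list[SensorReport]) -> bool:
    """Treat an attack as confirmed only if two independent
    phenomenologies (satellite AND radar) both report a launch."""
    satellite = any(r.detects_launch for r in reports if r.source == "satellite")
    radar = any(r.detects_launch for r in reports if r.source == "radar")
    return satellite and radar

# A single source reporting a launch is not enough on its own:
print(attack_confirmed([SensorReport("radar", True)]))            # False
print(attack_confirmed([SensorReport("radar", True),
                        SensorReport("satellite", True)]))        # True
```

Wolfsthal's point is about what counts as one of those two independent sources: in this framing, swapping an AI model in as one "phenomenology" would mean the confirmation no longer rests on two physically independent measurements.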
One of the reasons is basic: We don't understand how many AI systems work. They're black boxes. Even if they weren't, experts say, integrating them into the nuclear decisionmaking process would be a bad idea.
Latiff has his own concerns about AI systems reinforcing confirmation bias. 'I worry that even if the human is going to remain in control, just how meaningful that control is,' he says. 'I've been a commander. I know what it means to be accountable for my decisions. And you need that. You need to be able to assure the people for whom you work there's somebody responsible. If Johnny gets killed, who do I blame?'
Just as AI systems can't be held responsible when they fail, they're also bound by guardrails, training data, and programming. They cannot see outside themselves, so to speak. Despite their much-hyped ability to learn and reason, they are trapped by the boundaries humans set.
Lin brings up Stanislav Petrov, a lieutenant colonel of the Soviet Air Defence Forces who saved the world in 1983 when he decided not to pass an alert from the Soviet Union's nuclear warning systems up the chain of command.
'Let's pretend, for a minute, that he had relayed the message up the chain of command instead of being quiet … as he was supposed to do … and then world holocaust ensues. Where is the failure in that?' Lin says. 'One mistake was the machine. The second mistake was the human didn't realize it was a mistake. How is a human supposed to know that a machine is wrong?'
Petrov didn't know the machine was wrong. He guessed based on his experiences. His radar told him that the US had launched five missiles, but he knew an American attack would be all or nothing. Five was a small number. The computers were also new and had worked faster than he'd seen them perform before. He made a judgment call.
'Can we expect humans to be able to do that routinely? Is that a fair expectation?' Lin says. 'The point is that you have to go outside your training data. You must go outside your training data to be able to say: 'No, my training data is telling me something wrong.' By definition, [AI] can't do that.'
Donald Trump and the Pentagon have made it clear that AI is a top priority, invoking the nuclear arms race to make the case. In May, the Department of Energy declared in a post on X that 'AI is the next Manhattan Project, and the UNITED STATES WILL WIN.' The administration's 'AI Action Plan' depicted the rush toward artificial intelligence as an arms race, a competition against China that must be won.
'I think it's awful,' Lin says of the metaphors. 'For one thing, I knew when the Manhattan Project was done, and I could tell you when it was a success, right? We exploded a nuclear weapon. I don't know what it means to have a Manhattan Project for AI.'