
Latest news with #ArvidBell

Will US join the Israel-Iran war? Ex-Harvard professor says his AI can predict wars

Time of India

9 hours ago



As the Israel-Iran war enters its sixth day, global concerns are growing over potential U.S. involvement. While President Trump claims the U.S. is not directly part of the conflict, American fighter jets have entered Iranian airspace, raising questions about Washington's role. Amid escalating missile exchanges, former Harvard professor Arvid Bell has introduced an AI tool called North Star, designed to simulate geopolitical scenarios and predict war outcomes. The system uses digital models of world leaders to forecast responses to military actions. While the AI aims to prevent war through early insights, critics warn of its potential misuse.

As missile exchanges continue between Iran and Israel, the global community is on edge — not only over the potential spread of violence but also over whether the United States will deepen its involvement. While U.S. President Donald Trump has stated that Washington is not directly participating in Israel's ongoing military operations, the rhetoric and movement of American forces suggest a complex strategic stance. Meanwhile, an ex-Harvard academic claims that his AI system may already know what happens next.

AI That Predicts War: How It Works

According to Business Insider, at the recent AI+ Expo in Washington, Arvid Bell, a former Harvard professor and current CEO of the startup Anadyr Horizon, showcased an AI tool called North Star. The system, designed in collaboration with Nobel Prize-winning physicist Ferenc Dalnoki-Veress, generates simulations of geopolitical conflict using artificial 'digital twins' of world leaders. These models respond to different scenarios — such as economic sanctions or military actions — offering probabilistic forecasts of how real-world events might unfold. The AI was previously used to simulate the consequences of enforcing a no-fly zone over Ukraine, predicting a 60% chance of further escalation by Russia. Some believe similar modeling could help anticipate developments in the Israel-Iran conflict and guide diplomatic choices before violence spirals out of control.

The Ground Reality: Sixth Day of Conflict

The Israel-Iran conflict, now in its sixth consecutive day, has escalated significantly. Israel launched 'Operation Rising Lion' on June 13, targeting Iranian nuclear sites. Tehran responded with missiles and drones, claiming to have fired hypersonic Fattah-1 missiles into Israeli territory. The Israeli military has reported multiple missile barrages from Iran, with explosions heard in Tel Aviv. Meanwhile, Israeli airstrikes targeted military installations near Tehran, prompting civilian evacuation warnings. Iran's Supreme Leader Ayatollah Ali Khamenei has vowed to retaliate without restraint, while Israeli Prime Minister Benjamin Netanyahu has declared that further attacks are imminent. Iran also claims to have hit intelligence sites in Tel Aviv, while Israel asserts that it has maintained control over Iran's airspace. The death toll is rising: Iranian sources report over 220 fatalities, including dozens of civilians, while Israeli officials acknowledge at least 20 deaths on their side. Humanitarian concerns are mounting, with international groups confirming casualties and injuries on both sides.

US Involvement: Ambiguity and Pressure

Despite President Trump's public denial of direct U.S. participation, he has issued a warning to Iran and called for its 'unconditional surrender.' Reports also confirm that U.S. fighter jets have entered Iranian airspace, raising questions about America's neutrality. At the G7 Summit in Canada, world leaders urged de-escalation. French President Emmanuel Macron stated that Trump was considering a ceasefire proposal, while Germany noted that Israel and the U.S. combined could potentially dismantle Iran's nuclear program. U.S. intelligence has acknowledged Iran's extensive ballistic missile arsenal, warning of its deterrent power across the region. Concerns have also emerged about Iran's ability to target American bases and allies.

Can AI Prevent What Comes Next?

Bell hopes his AI model can offer early warnings to prevent conflicts rather than merely predict them. However, critics caution against over-reliance on such systems. Some fear that governments may act rashly based on AI-driven forecasts, possibly triggering the very wars they hope to avoid. Others are wary of the opaque nature of the technology and its funding links to defense-sector interests. Still, interest in the tool is growing. Anadyr Horizon has drawn support from high-level entities, including former Google CEO Eric Schmidt's office. Bell maintains that the system is meant to safeguard peace — not manipulate it. As the situation between Israel and Iran intensifies, all eyes are on whether U.S. involvement will deepen — and whether AI might soon play a role in shaping not just how wars are fought, but whether they begin at all.

Ex-Harvard Professor Says His AI Can Predict The Next War: "I Want to Prevent, Not Destroy"

NDTV

11 hours ago



As global tensions rise and conflicts between nations escalate, whether on the battlefield or in cyberspace, many experts and even mystics warn of a looming world war. Yet this era is marked not only by conflict but also by rapid technological advancement, especially in AI and machine learning. Amid this dual reality, Arvid Bell, cofounder of Anadyr Horizon, introduces a hopeful concept: "peace tech". He claims his software is designed not to fight wars, but to prevent them. "I want to simulate what breaks the world. I don't want to break the world," Bell said.

According to Business Insider, the former Harvard political scientist begins by noting that Russia's all-out invasion of Ukraine came as a surprise to many political scientists. But his AI software, he says, could have predicted it. The world is witnessing a surge in global conflict, with alarming developments involving Israel and Iran, India and Pakistan, Gaza, and Ukraine. These conflicts have resulted in significant loss of life, highlighting the need for innovative solutions like AI to anticipate and mitigate humanity's destructive tendencies amid rising tensions.

Anadyr Horizon's North Star software uses AI to simulate world leaders' behaviour in various scenarios, predicting their reactions to stimuli like economic sanctions. These "digital twins" can mimic leaders' personalities, even accounting for factors like sleep deprivation, to forecast potential conflicts and prevent wars. Bell hopes North Star's predictive capabilities will help diplomats and politicians make better decisions about how to negotiate during times of conflict and even prevent wars.

Anadyr is a reference to the code name the USSR used for its deployment of ballistic missiles and warfighters to the western coast of Cuba in October 1962. If President John F. Kennedy had had a tool like North Star to preempt the Cuban Missile Crisis, Bell posits, instead of having 13 days to respond he might have had six months. "We are reclaiming this name to say, 'OK, the next Operation Anadyr, we will detect early,'" he says.
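Anadyr Horizon has not disclosed how its "digital twins" are actually built, but the behaviour described here, a leader profile whose traits (down to sleep deprivation) shift the likelihood of different reactions to a stimulus such as sanctions, can be sketched with a toy model. Every field, weight, and reaction category below is an invented illustration, not a detail of North Star.

```python
import random
from dataclasses import dataclass


@dataclass
class LeaderTwin:
    """Toy 'digital twin': a leader profile whose traits shift the odds
    of each reaction to a stimulus. All fields and weights are invented."""
    name: str
    risk_tolerance: float  # 0..1; higher means more willing to escalate
    sleep_deprived: bool   # the article says fatigue is one modeled factor

    def reaction_weights(self, stimulus: str) -> dict:
        # Start from equal odds, then nudge them with the persona traits.
        weights = {"negotiate": 1.0, "retaliate_economically": 1.0, "escalate_militarily": 1.0}
        if stimulus == "economic_sanctions":
            weights["retaliate_economically"] += 1.0
        weights["escalate_militarily"] += 2.0 * self.risk_tolerance
        if self.sleep_deprived:
            weights["escalate_militarily"] += 0.5  # toy assumption: fatigue adds aggression
        return weights

    def react(self, stimulus: str, rng: random.Random) -> str:
        # Sample one reaction in proportion to the persona-adjusted weights.
        weights = self.reaction_weights(stimulus)
        options = list(weights)
        return rng.choices(options, weights=[weights[o] for o in options])[0]


if __name__ == "__main__":
    rng = random.Random(42)
    twin = LeaderTwin(name="Leader A", risk_tolerance=0.7, sleep_deprived=True)
    for _ in range(3):
        print(twin.react("economic_sanctions", rng))
```

Running it prints a few sampled reactions; a production system would presumably replace the hand-tuned weights with a learned or LLM-driven policy conditioned on far more data.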

The war over the peace business

Business Insider

2 days ago



At the second annual AI+ Expo in Washington, DC, in early June, war is the word of the day. As a mix of Beltway bureaucrats, military personnel, and Washington's consultant class peruse the expansive Walter E. Washington Convention Center, a Palantir booth showcases its latest in data-collection suites for "warfighters." Lockheed Martin touts the many ways it is implementing AI throughout its weaponry systems. On the soundstage, the defense tech darling Mach Industries is selling its newest uncrewed aerial vehicles. "We're living in a world with great-power competition," the presenter says. "We can't rule out the possibility of war — but the best way to prevent a war is deterrence," he says, flanked by videos of drones flying through what look like the rugged mountains and valleys of Kandahar.

Hosted by the Special Competitive Studies Project, a think tank led by former Google CEO Eric Schmidt, the expo seeks to bridge the gap between Silicon Valley entrepreneurs and Washington policymakers to "strengthen" America and its allies' "competitiveness in critical technologies."

One floor below, a startup called Anadyr Horizon is making a very different sales pitch, for software that seeks to prevent war rather than fight it: "Peace tech," as the company's cofounder Arvid Bell calls it. Dressed in white khakis and a black pinstripe suit jacket with a dove and olive branch pinned to his lapel (a gift from his husband), the former Harvard political scientist begins by noting that Russia's all-out invasion of Ukraine had come as a surprise to many political scientists. But his AI software, he says, could have predicted it.

Long the domain of fantasy and science fiction, the idea of forecasting conflict has now become a serious pursuit. In Isaac Asimov's 1950s "Foundation" series, the main character develops an algorithm that allows him to predict the decline of the Galactic Empire, angering its rulers and forcing him into exile. During the coronavirus pandemic, the US State Department experimented with AI fed with Twitter data to predict "COVID cases" and "violent events." In its AI audit two years ago, the State Department revealed that it had started training AI on "open-source political, social, and economic datasets" to predict "mass civilian killings." The UN is also said to have experimented with AI to model the war in Gaza.

Interest in AI's ability to anticipate humanity's most destructive urges comes as the world sees an alarming rise in global conflict. Last week, Israel launched strikes against Tehran under the pretense that the Ayatollah was inching closer to developing a nuclear bomb. A month ago, India and Pakistan came to the brink of war over the decadeslong dispute in Kashmir. The ongoing Gaza conflict has claimed more than 50,000 lives, and the war in Ukraine more than 150,000, according to conservative estimates.

Anadyr Horizon believes it can prevent war by using AI to create facsimiles of world leaders that live and interact in a simulation of the real world. Its software, North Star, is designed to predict how real-world decision-makers might react to a given situation or stimulus — like economic sanctions or a naval blockade — by observing their AI counterparts.
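That is the core mechanism: AI facsimiles of decision-makers react to a stimulus across many simulated runs, and the demo described below condenses those runs into one aggregate number (a 60% chance of Russian escalation under a no-fly zone). North Star's actual models are proprietary, so the sketch below only illustrates the general shape of such an ensemble under invented assumptions: perturb the scenario slightly on each run, apply a stub decision model in place of the real leader simulation, and report the fraction of runs that end in escalation. The Scenario fields and the numbers in decide_escalation are hypothetical.

```python
import random
from dataclasses import dataclass


@dataclass
class Scenario:
    """One perturbed run of a hypothetical no-fly-zone scenario. The fields
    are invented stand-ins for the per-run variables the article mentions."""
    policy: str
    leader_resolve: float       # 0..1, sampled fresh for every run
    advisor_delay_hours: float  # small circumstantial variation per run


def decide_escalation(s: Scenario, rng: random.Random) -> bool:
    """Stub decision model standing in for the proprietary leader simulation.
    Returns True if the simulated leader escalates in this run."""
    pressure = 0.45 if s.policy == "no_fly_zone" else 0.20
    pressure += 0.40 * s.leader_resolve
    pressure -= 0.01 * s.advisor_delay_hours  # toy effect: a delayed briefing cools the response
    return rng.random() < min(max(pressure, 0.0), 1.0)


def escalation_probability(policy: str, runs: int = 10_000, seed: int = 0) -> float:
    """Monte Carlo over many slightly different runs, then aggregate."""
    rng = random.Random(seed)
    escalations = 0
    for _ in range(runs):
        scenario = Scenario(
            policy=policy,
            leader_resolve=rng.uniform(0.0, 1.0),
            advisor_delay_hours=rng.uniform(0.0, 6.0),
        )
        escalations += decide_escalation(scenario, rng)
    return escalations / runs


if __name__ == "__main__":
    p = escalation_probability("no_fly_zone")
    print(f"Estimated escalation probability under a no-fly zone: {p:.0%}")
```

With these toy numbers the estimate lands in the same neighborhood as the article's 60% figure, but that is an artifact of the invented weights, not a reproduction of North Star's result.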
Bell says these digital twins are so sophisticated that they can emulate how a leader like Vladimir Putin might behave when he's sleep deprived versus having gotten a full night's rest. (Much of the formula behind these scores is Anadyr Horizon's secret sauce, but the Russian autocrat has boasted that he gets by on four hours of sleep.)

On the screen, North Star looks like the 1970s video game "The Oregon Trail," with reams of text chronicling events from the simulated world. Bell demonstrates by prompting it to show what would happen if the United States imposed a no-fly zone over Ukraine — a maneuver with a mixed history (e.g., effective in preventing Saddam Hussein's regime from carpet-bombing southern Iraq in 1991; ineffective in preventing the Srebrenica massacre of 1995). North Star then runs thousands of simulations, each its own multiverse with slightly different variables, from whether a key decision-maker was late to work to the order in which a single conversation over military strategy might happen. It can model the outcomes of multiple different policies, and even offer advice on which leaders might be open to back channel negotiations. Eventually the program spits out a result: Russia is 60% likely to escalate the conflict if a no-fly zone is imposed. It also provides a hypothetical SVR intelligence brief describing Russia's devastating escalation. "Over the past 24 hours, we have delivered high-precision strikes on enemy troop concentrations and military equipment, destroying ammunition depots, communication hubs, and infrastructure," it reads.

For Bell, the origins of Anadyr Horizon were relatively low-tech. Back when he was a lecturer at Harvard — teaching courses in conflict de-escalation and post-Soviet geopolitics — he used funds from the university's startup incubator, the Scholar-Entrepreneur Initiative, to launch an organization that held a yearly war gaming event with high-profile representatives from the United States and the European Union. Military and diplomatic leaders would join Bell in Cambridge and be assigned the role of a foreign dignitary — Gen. Zhang Youxia, the vice chairman of China's Central Military Commission, say. For three days of the event, they would become method actors going to painstaking lengths to re-create the lives of their assigned leaders as realistically as possible, wearing the same military fatigues, chauffeured in their preferred cars, and conferring in a replica of their situation room. "It's like theater," Bell tells me.

Bell, who'd been traveling to Afghanistan as a Ph.D. researcher over the past decade, drew from his experience studying breakdowns in negotiations during armed conflict. His goal was to have participants empathize with those they might otherwise see as their adversary. During one such event in December 2021, the Russian team invaded Ukraine using the exact multipronged military assault that Putin would use 75 days later.

Two years later, he was introduced to Ferenc Dalnoki-Veress, a Nobel Prize-winning physicist, through Bill Potter, the founding director of the James Martin Center for Nonproliferation Studies at Middlebury College. At the time, Dalnoki-Veress was experimenting with prompting different AI agents to debate each other. "It was something stupid," Dalnoki-Veress says. "Like one agent has to prove to the other agent that cherry lollipops are better than lemon lollipops." Dalnoki-Veress was impressed by the lengths that the robots would go to convince the others of their position.
Sometimes they'd even lie to each other — if prompted to win at all costs. Soon he started experimenting with whether the agents would collaborate, negotiate, or even write treaties with one another. "It occurred to me how human it was," says Dalnoki-Veress.

Last fall, the trio of academics founded Anadyr with the belief they could imbue these bots with the personalities of world leaders and realistically emulate their interactions. Instead of organizing one war game a year, they could game out hundreds of thousands of scenarios over the course of one night and get "probabilistic estimates of where conflicts are really going to happen," Bell says. He hopes North Star's predictive capabilities will help diplomats and politicians make better decisions about how to negotiate during times of conflict and even prevent wars.

Anadyr is a reference to the code name the USSR used for its deployment of ballistic missiles and warfighters to the western coast of Cuba in October 1962. If President John F. Kennedy had had a tool like North Star to preempt the Cuban Missile Crisis, Bell posits, instead of having 13 days to respond he might have had six months. "We are reclaiming this name to say, 'OK, the next Operation Anadyr, we will detect early,'" he says.

In doing so, the company and its venture capital backers believe it can make billions. By some estimates, violent conflict cost the global economy $19 trillion in 2023 alone. And one study conducted by the International Monetary Fund suggests every dollar spent on conflict prevention can yield a return as high as $103 in countries that have recently experienced violent conflict.

"Peace tech is going after a huge market," says Brian Abrams, a founder of B Ventures, an investor in Anadyr. "If you look at climate tech, a decade ago, the space was very small. It wasn't even called climate tech," he adds. "Now, climate tech sees about $50 billion in investment annually." He says peace tech can replicate the growth seen in the climate tech industry.

Anadyr's early clients aren't confined to state agencies; the company is also selling its software to corporate risk managers who want to understand how social unrest might affect their investments and assets in different countries. Anadyr has also raised funds from Commonweal Ventures, an early investor in the defense contractor Palantir, and AIN Ventures, a veteran-led firm that invests in technologies that can be useful both in the military and in the private sector. Bell says they've already been able to close a seven-figure pre-seed round, though he didn't disclose the exact figure.

That a company dedicated to preventing war had chosen a defense expo to unveil its product wasn't lost on Bell. But the lines between peace and war technology are blurrier than they may seem. The defense contractor Rhombus Power, a sponsor of the expo, has its own AI conflict-prediction software that it says made accurate predictions of Russia's invasion of Ukraine. "We look at peace tech as the flip side of the same coin," Abrams says. According to Abrams, the size of the defense industry shows that there is a market for technology seeking to prevent war. "The difference," he says, between peace tech and war tech is "a different approach to the same problem."

Even the audience at Bell's demo had its fair share of defense tech funders.
When one of the venture capitalists in the crowd asks whether he's considered the technology's military applications, he tells them that's a line too far for Anadyr Horizon, at present. "For now we're definitely focused on the strategic level," he says. "Because we're trying to stop war." A savvy salesman, he adds: "We're still early enough to see where the market will pull us."

Over lunch, I ask the founders if they believe something is lost in automating the war games Bell conducted at Harvard. "What you're losing," Bell concedes, "is the extremely personal and emotional experience of an American admiral who is put into the shoes of his Chinese counterpart, and for the first time is looking at American warships coming to his coast." But you can only run such a realistic simulation with real people a few times a year. "The capabilities of AI are exponential," he says. "The impact is on a much greater scale."

There are other challenges with using artificial intelligence for something as high-stakes as preventing the next world war. Researchers have long warned that AI models may hold biases hidden in the data on which they were trained. "People say history is written by the victor," says Timnit Gebru, an AI researcher who fled Eritrea in 1998, during the country's war with Ethiopia. An AI system trained on open-source information on the internet, she says, will inherently represent the biases of the most online groups — which tend to be Western or European. "The notion that you're going to use a lot of data on the internet and therefore represent some sort of unbiased truth is already fraught," Gebru adds.

The founders are unwilling to reveal the actual data their digital world leaders are trained on, but they do offer that Anadyr Horizon uses "proprietary datasets, open-source intelligence, and coded behavioral inputs" — and that they go to great lengths to use books and data from outside the English-speaking world to account for differing world views. The leaders they emulate, Bell says, use as many as 150 datapoints.

Biases in these new AI systems are especially hard to interrogate not only because of this lack of transparency around the data used by the systems, but also because of how the chatbots interpret that information. In the case of generative AI, "intelligence" is a misnomer — "they're essentially spitting out a likely sequence of words," Gebru says. This is why bots are so prone to confidently expressing falsehoods. They don't actually know what they're saying. It's also hard to trace why they make certain decisions. "Neural networks aren't explainable," Gebru says. "It's not like a regression model where you can go back and see how and why it made predictions in certain ways."

One study into the use of large language models for diplomatic decision-making, conducted last year by researchers at Stanford, found that AI models tended to be warmongers. "It appears LLM-based agents tended to equate increased military spending and deterrent behavior with an increase in power and security," the researchers wrote. "In some cases, this tendency even led to decisions to execute a full nuclear attack in order to de-escalate conflicts." A trigger-happy AI system could have severe consequences in the real world.
For example, if hedge funds or corporations act collectively on a prediction from a tool like North Star that a country in which they have heavily invested is on the brink of collapse, and they preemptively sell off their assets, it may lead to the very situation the system had predicted — the mass exodus of capital actually causing currency depreciation, unemployment, and liquidity crises. "The claim that a place is unstable will make it unstable," Gebru explains.

For now, that problem seems academic to Bell. "It is a bit like a philosophical or ethical problem. Like the butterfly effect," he says. "I think we'll have to wrestle with it at some point." He insists they aren't the typical "move fast, break things" tech company. They're deliberate about consulting with subject-area experts as they model different countries and world leaders, he says, and have released the product to only a select list of firms. "I want to simulate what breaks the world. I don't want to break the world."

Jon Danilowicz, a former senior diplomat who served in South Sudan, Pakistan, and Bangladesh, notes the inherent unpredictability of war, with contingencies and factors that can't always be accounted for. "If you look at what's going to happen with Israel taking action against Iran's nuclear program, you can do all kinds of scenarios on how that's going to play out. I'm sure there's somebody who's going to make a prediction, which after the fact they'll be able to say they were right. But what about the multiples that got it totally wrong?"

"There's never certainty in these kinds of decision-making situations," says Bell. "In some senses, we can't predict the future. But we can assign probabilities. Then it's up to the user to decide what to do."

In the meantime, the company has more pressing problems. Like many startups building on top of generative AI models, the costs to run North Star are huge. "I don't want to give you a figure, but if you knew, you'd drop your fork. It's extremely expensive," he says. On top of that, contracting with the government and receiving the necessary clearances can bring its own bureaucratic red tape and expenses. Despite this, there seems to be no shortage of interest in their technology.

As I leave Bell and Dalnoki-Veress, they rush off to another meeting: Eric Schmidt's office wanted a private demonstration.
