Threats and tension after Iran found in breach of nuclear obligations


The National | 13-06-2025
Tension is escalating in the Middle East after the UN's nuclear watchdog found Iran in breach of its non-proliferation obligations. Hundreds were killed in a plane crash in India yesterday. Activists heading to Gaza through Egypt are facing arrest and deportation.

On today's episode of Trending Middle East:

- UN nuclear watchdog finds Iran in breach of non-proliferation duties
- Timeline: Diplomacy and confrontation over Iran's nuclear programme
- 'No survivors' after Air India flight from Ahmedabad to London Gatwick crashes into city
- Boeing shares drop after Air India crash
- Activists face arrest and deportation from Egypt as symbolic Global March to Gaza stalls

This episode features Aveen Karim, Assistant Foreign Editor; Paul Carey, London Deputy Bureau Chief; and Kamal Tabikha, Cairo Correspondent.

Related Articles

Hundreds of Israeli ex-security officials urge Trump to help end Gaza war

The National

32 minutes ago



Hundreds of retired Israeli security officials, including former heads of intelligence agencies, have urged US President Donald Trump to pressure their own government to end the war in Gaza.

'It is our professional judgment that Hamas no longer poses a strategic threat to Israel,' the former officials wrote in an open letter, shared with the media on Monday.

'At first this war was a just war, a defensive war, but when we achieved all military objectives, this war ceased to be a just war,' said Ami Ayalon, former director of the Shin Bet security service. The war, nearing its 23rd month, 'is leading the state of Israel to lose its security and identity', Mr Ayalon warned in a video released to accompany the letter.

Signed by 550 people, including former chiefs of Shin Bet and the Mossad spy agency, the letter called on Mr Trump to 'steer' Prime Minister Benjamin Netanyahu towards a ceasefire.

Israel launched its war on Gaza in response to the October 7, 2023, Hamas-led attack on Israeli communities, in which militants killed about 1,200 people and abducted around 240. The Israeli military has killed more than 60,000 Palestinians, mostly children and women, and caused starvation by using aid as a weapon.

In recent weeks, Israel has come under increasing international pressure to agree to a ceasefire that could see the remaining Israeli hostages held in Gaza released and UN agencies distribute humanitarian aid. But some in Israel, including ministers in Mr Netanyahu's coalition government, are instead pushing for Israeli forces to press on and for Gaza to be occupied in whole or in part.

Three former Mossad heads signed the letter. Other signatories include five former heads of Shin Bet and three former military chiefs of staff.

The letter argued that the Israeli military 'has long accomplished the two objectives that could be achieved by force: dismantling Hamas's military formations and governance'. 'The third, and most important, can only be achieved through a deal: bringing all the hostages home,' it added. 'Chasing remaining senior Hamas operatives can be done later,' the letter said.

In the letter, the former officials tell Mr Trump that he has credibility with the majority of Israelis and can put pressure on Mr Netanyahu to end the war and return the hostages. After a ceasefire, the signatories argue, Mr Trump could forge a regional coalition to support a reformed Palestinian Authority taking charge of Gaza as an alternative to Hamas rule.

Israeli ex-security chiefs urge Trump to help end Gaza war

Khaleej Times

3 hours ago



More than 600 retired Israeli security officials, including former heads of intelligence agencies, have urged US President Donald Trump to pressure their own government to end the war in Gaza. "It is our professional judgement that Hamas no longer poses a strategic threat to Israel," the former officials wrote in an open letter shared with the media on Monday, calling on Trump to "steer" Prime Minister Benjamin Netanyahu's decisions.

Incorrect Grok answers amid Gaza devastation show risks of blindly trusting AI

The National

3 hours ago



A spike in misinformation amid the dire situation in Gaza has highlighted how imperfect artificial intelligence systems are being used to spread it.

Reaction to a recent social media post from US Senator Bernie Sanders, in which he shared a photo of an emaciated child in the besieged Palestinian enclave, shows just how fast AI tools can fuel incorrect narratives. In the post on X, he accused Israeli Prime Minister Benjamin Netanyahu of lying by promoting the idea that there was "no starvation in Gaza".

A user asked Grok, X's AI chatbot, for more information on the origin of the images. "These images are from 2016, showing malnourished children in a hospital in Hodeidah, Yemen, amid the civil war there ... They do not depict current events in Gaza," Grok said.

Several other users were able to verify that the images had in fact been taken recently in Gaza, but those voices were initially drowned out by hundreds who reposted Grok's incorrect answer. Proponents of Israel's continuing strategy in Gaza used the false information from Grok to push the narrative that the humanitarian crisis there was being exaggerated.

When some users initially told the chatbot it was wrong, and explained why, Grok stood firm. "I'm committed to facts, not any agenda ... the images are from Yemen in 2016," it insisted. "Correcting their misuse in a Gaza context isn't lying – it's truth." Later, however, after metadata and sources confirmed that the photos had been taken in Gaza, Grok apologised.

Another recent incident involving Grok's confidently wrong answers about the situation in Gaza also led to the spread of falsehoods. Several images began circulating on social media purporting to show people in Egypt filling bottles with food and throwing them into the sea in the hope that they would reach Gaza. While several videos showed similar efforts, many of the circulating photos were later determined to be fake, according to PolitiFact, a non-partisan fact-checking organisation.

This is not the first time Grok's answers have come under scrutiny. Last month, the chatbot began responding to user prompts with offensive comments, including anti-Semitic remarks and praise for Adolf Hitler.

High stakes and major consequences

AI chatbot enthusiasts are quick to point out that the technology is far from perfect and that it continues to learn. Grok and other chatbots include disclaimers warning users that they can be prone to mistakes. In the fast-paced world of social media, however, those fine-print warnings are often forgotten, while the consequences of misinformation grow substantially, most recently with regard to the Gaza war.

Israel's campaign in Gaza, which followed the 2023 attacks by Hamas-led fighters that resulted in the deaths of about 1,200 people and the capture of 240 hostages, has killed more than 60,200 people and injured about 147,000. The war has raged against a backdrop of rapidly developing technology that is adding to the confusion.

"This chilling disconnect between reality and narratives online and in the media has increasingly become a feature of modern war," wrote Mahsa Alimardani and Sam Gregory in a recent analysis on AI and conflict for the Carnegie Endowment think tank. The experts point out that while several tools can be used to verify photos and videos, and to flag possible AI manipulation, it will take broader efforts to prevent the spread of misinformation.
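One of the simpler verification steps mentioned above, checking a photo's embedded metadata, can be illustrated with a short Python sketch. The example below assumes a local file named photo.jpg and Pillow version 9.4 or newer; the filename and the tags printed are illustrative assumptions, not details taken from the fact-checks described in this article.

```python
# Minimal sketch: print a photo's EXIF metadata, if any is present.
# Requires Pillow >= 9.4 (for ExifTags.IFD); "photo.jpg" is a placeholder filename.
from PIL import ExifTags, Image


def print_exif(path: str) -> None:
    """Print human-readable EXIF tags, including any GPS data, for one image."""
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found - the file may have been stripped or re-encoded.")
        return

    # Base IFD tags: camera make/model, software, timestamps, etc.
    for tag_id, value in exif.items():
        tag_name = ExifTags.TAGS.get(tag_id, hex(tag_id))
        print(f"{tag_name}: {value}")

    # GPS coordinates, when present, live in a separate IFD.
    gps = exif.get_ifd(ExifTags.IFD.GPSInfo)
    for tag_id, value in gps.items():
        tag_name = ExifTags.GPSTAGS.get(tag_id, hex(tag_id))
        print(f"GPS {tag_name}: {value}")


if __name__ == "__main__":
    print_exif("photo.jpg")
```

Most social platforms strip EXIF data on upload, so missing metadata proves little on its own; fact-checkers typically combine such checks with reverse image searches and confirmation from original sources.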
Technology companies, the experts say, must "share the burden by embedding provenance responsibly, facilitating globally effective detection, flagging materially deceptive manipulated content, and doubling down on protecting users in high-risk regions".

AI's triumphs and continuing tribulations

Much of the recent misinformation and disinformation controversy around AI and modern conflict can be traced back to how the various AI tools handle images.

From the earliest days of AI, particularly in the 1970s and 1980s, researchers sought to replicate the human brain, more specifically its neural networks, in which neurons and the connections between them strengthen over time, giving humans the ability to reason, remember and identify. As computer processors have become more powerful and more economical, replicating those networks in software, often called "artificial neural networks", has become significantly easier. The internet, with its seemingly endless photos, videos and data, has also become a way for those networks to be trained continuously on new information.

Some of the earliest widely noticed uses of AI involved software that could identify images. This was demonstrated back in 2012 by Alex Krizhevsky, then a student at the University of Toronto, whose research was overseen by British-Canadian computer scientist Geoffrey Hinton, widely considered the godfather of AI. "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images," his paper on deep convolutional neural networks read. "Our results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging data set." He added, however, that the network had the potential to degrade and was far from perfect.

AI has since improved by leaps and bounds, though there is still room for improvement. The latest AI chatbots, such as OpenAI's ChatGPT and Google's Gemini, have capitalised on powerful CPUs and GPUs, making it possible for just about anyone to upload an image and ask the chatbot to explain what it shows. Some users, for example, have uploaded pictures of plants they cannot identify and asked a chatbot to name them. When it works, it is helpful; when it does not, it is usually harmless.

In the world of mass media, however, and more broadly on social media, when chatbots are wrong, as Grok was about the Gaza photos, the consequences can be far-reaching.
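For readers curious about what the image-classification idea described above looks like in code, the sketch below defines a tiny convolutional network in PyTorch. It is a toy illustration with arbitrary layer sizes and a made-up 10-class output, not a reconstruction of the 2012 model or of any chatbot's vision system.

```python
# A minimal, illustrative convolutional image classifier in PyTorch.
# Layer sizes, the 32x32 input resolution and the 10-class output are
# arbitrary assumptions chosen to keep the example small.
import torch
from torch import nn


class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers learn local visual features (edges, textures).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        # A fully connected layer maps the pooled features to class scores.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)
        return self.classifier(x)


if __name__ == "__main__":
    model = TinyConvNet()
    fake_batch = torch.randn(4, 3, 32, 32)  # four random 32x32 RGB "images"
    scores = model(fake_batch)
    print(scores.shape)  # torch.Size([4, 10])
```

Production models are orders of magnitude larger and are trained on millions of labelled images; the sketch only shows the basic pattern of stacked convolutional layers feeding a final classification layer.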
