Israel orders military to implement Gaza decisions amid strategy rift

The Sun, 16 hours ago
JERUSALEM: Israel's defence minister has declared that the military must carry out any government decisions regarding Gaza, following reports of internal disagreements over a potential full occupation of the territory.
Prime Minister Benjamin Netanyahu is expected to finalise a new strategy soon, with his security cabinet set to meet on Thursday.
Netanyahu has repeatedly emphasised the need to 'complete' the defeat of Hamas to secure the release of hostages taken during the October 2023 attack.
Israeli media reports suggest an escalation of military operations, including in densely populated areas such as Gaza City and refugee camps where hostages may be held.
The military has issued fresh evacuation warnings for parts of Gaza City and Khan Yunis, signalling an expansion of ground operations.
Reports indicate tensions between Netanyahu and armed forces chief Lieutenant General Eyal Zamir over the feasibility of a full occupation.
Zamir reportedly warned that such a move would be akin to 'walking into a trap' during a recent security meeting.
Defence Minister Israel Katz affirmed that while military leaders can voice concerns, the army must follow government decisions.
Opposition leader Yair Lapid has criticised the idea of occupying Gaza, calling it operationally, morally, and economically unwise.
US President Donald Trump stated that any decision on Gaza's occupation is 'up to Israel,' distancing himself from the issue.
Pressure is mounting on Israel to end the war, with growing concerns over Gaza's humanitarian crisis and the fate of remaining hostages.
Only 49 of the 251 hostages taken in 2023 remain in Gaza, with 27 presumed dead by Israeli authorities.
The UN has warned of famine in Gaza, with just 1.5% of farmland accessible and undamaged.
FAO director-general Qu Dongyu stated that starvation is worsening due to blocked access and collapsed food systems.
In a recent incident, an overturned aid truck killed at least 22 people in central Gaza, amid accusations of Israeli obstruction.
Israel denies involvement, though aid restrictions remain a contentious issue despite a partial easing in May.
The war, triggered by Hamas's 2023 attack, has resulted in over 61,000 Palestinian deaths, mostly civilians, according to Gaza's health ministry. - AFP
Related Articles

Trusting AI? Grok's Gaza photo error says not yet

New Straits Times, an hour ago

AN image by AFP photojournalist Omar al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel's blockade has fuelled fears of mass famine in the Palestinian territory. But when social media users asked Grok where it came from, X boss Elon Musk's artificial intelligence (AI) chatbot was certain that the photograph was taken in Yemen nearly seven years ago.

The AI bot's untrue response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo. At a time when Internet users are increasingly turning to AI to verify images, the furore shows the risks of trusting tools like Grok when the technology is far from error-free.

Grok said the photo showed Amal Hussain, a 7-year-old Yemeni child, in October 2018. In fact the photo shows 9-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on Aug 2, 2025. Before the war, sparked by Hamas's Oct 7, 2023 attack on Israel, Mariam weighed 25kg, said her mother. Today, she weighs only 9kg. The only nutrition she gets to help her condition is milk, said Modallala, and even that is "not always available".

Challenged on its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources." The chatbot eventually issued a response that recognised the error, but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen. The chatbot has previously issued content that praised Nazi leader Adolf Hitler and that suggested people with Jewish surnames were more likely to spread online hate.

Grok's mistakes illustrate the limits of AI tools, whose inner workings are as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technological ethics and author of a book on AI tools, Hello ChatGPT. "We don't know exactly why they give this or that reply, nor how they prioritise their sources," he said. Each AI has biases linked to the information it was trained on and the instructions of its creators, he added.

In the researcher's view, Grok, made by Musk's xAI startup, shows "highly pronounced biases which are highly aligned with the ideology" of the South African billionaire, a former confidant of United States President Donald Trump and a standard-bearer for the radical right.

Asking a chatbot to pinpoint a photo's origin takes it out of its proper role, said Diesbach. "Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'." AI does not necessarily seek accuracy; "that's not the goal", the expert said.

Another AFP photograph of a starving Gazan child by al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016. That error led to Internet users accusing the French newspaper Liberation, which had published the photo, of manipulation.

An AI's bias is linked to the data it is fed and to what happens during fine-tuning, the so-called alignment phase, which determines what the model rates as a good or bad answer. "Just because you explain to it that the answer's wrong doesn't mean it will then give a different one," said Diesbach. "Its training data has not changed and neither has its alignment."

Grok is not alone in wrongly identifying images. When AFP asked Mistral AI's Le Chat, which is in part trained on AFP's articles under an agreement between the French startup and the news agency, the bot also misidentified the photo of Mariam Dawwas as being from Yemen.

For Diesbach, chatbots must never be used as tools to verify facts. "They are not made to tell the truth," but to "generate content, whether true or false", he said. "You have to look at it like a friendly pathological liar; it may not always lie, but it always could."

Israel army chief vows to express military stance without fear

The Sun, 4 hours ago

JERUSALEM: Israeli military chief Lieutenant General Eyal Zamir vowed to continue expressing the military's position 'without fear' ahead of a key security cabinet meeting. The meeting is expected to discuss war plans for Gaza amid reported disagreements between the cabinet and Zamir.

'We will continue to express our position without fear, in a pragmatic, independent, and professional manner,' Zamir said in a military statement.

Israeli media reported the security cabinet would meet later in the day, with Prime Minister Benjamin Netanyahu likely seeking approval for a full Gaza takeover. Defence Minister Israel Katz earlier stated the military must execute any government decision on Gaza, following reports that Zamir opposed the plan.

'We are not dealing with theory; we are dealing with matters of life and death, with the defence of the state,' Zamir emphasised. He added the military would act with 'responsibility, integrity, and determination' for Israel's security.

Katz affirmed the military must respect government policies while acknowledging Zamir's right to express his stance in appropriate forums. – AFP
