
Latest news with #Modallala

Grok, is that Gaza? AI image checks mislocate news photographs

Daily Tribune

2 days ago


AFP | Paris

This image by AFP photojournalist Omar al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel's blockade has fuelled fears of mass famine in the Palestinian territory. But when social media users asked Grok where it came from, X boss Elon Musk's artificial intelligence chatbot was certain that the photograph was taken in Yemen nearly seven years ago.

The AI bot's untrue response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo.

At a time when internet users are increasingly turning to AI to verify images, the furore shows the risks of trusting tools like Grok when the technology is far from error-free.

Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018. In fact the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025.

Before the war, sparked by Hamas's October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP. Today, she weighs only nine. The only nutrition she gets to help her condition is milk, Modallala told AFP -- and even that's "not always available".

Challenged on its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources." The chatbot eventually issued a response that recognised the error -- but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.

The chatbot has previously issued content that praised Nazi leader Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

Radical right bias

Grok's mistakes illustrate the limits of AI tools, whose functions are as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technological ethics.

"We don't know exactly why they give this or that reply, nor how they prioritise their sources," said Diesbach, author of a book on AI tools, "Hello ChatGPT".

Each AI has biases linked to the information it was trained on and the instructions of its creators, he said. In the researcher's view, Grok, made by Musk's xAI start-up, shows "highly pronounced biases which are highly aligned with the ideology" of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.

Asking a chatbot to pinpoint a photo's origin takes it out of its proper role, said Diesbach. "Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'."

AI does not necessarily seek accuracy -- "that's not the goal," the expert said.

Another AFP photograph of a starving Gazan child by al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016. That error led to internet users accusing the French newspaper Liberation, which had published the photo, of manipulation.

'Friendly pathological liar'

An AI's bias is linked to the data it is fed and what happens during fine-tuning -- the so-called alignment phase -- which then determines what the model would rate as a good or bad answer.

"Just because you explain to it that the answer's wrong doesn't mean it will then give a different one," Diesbach said. "Its training data has not changed and neither has its alignment."

Grok is not alone in wrongly identifying images. When AFP asked Mistral AI's Le Chat -- which is in part trained on AFP's articles under an agreement between the French start-up and the news agency -- the bot also misidentified the photo of Mariam Dawwas as being from Yemen.

For Diesbach, chatbots must never be used as tools to verify facts. "They are not made to tell the truth," but to "generate content, whether true or false", he said.

Gaza children face battle for survival: 10 disturbing photos of horrific hunger crisis and misery

Mint

4 days ago


10 Photos. Updated: 08 Aug 2025, 09:34 PM IST

As many as 12,000 children in Gaza are suffering from acute malnutrition, and hunger-related deaths are rising, according to the World Health Organization. Many kids lost over half their body weight amid war, as Israel blocked food entirely from entering Gaza for 2½ months starting in March.

1/10 Palestinians reach for food and aid from the back of a moving truck along the Morag corridor near Rafah, southern Gaza Strip, on August 4. (AP)

2/10 Crowds gather at a community kitchen in Gaza City, northern Gaza Strip, on August 4, struggling to get donated food. (AP)

3/10 People push forward to receive meals at a community kitchen in northern Gaza City, on August 4, amid ongoing shortages. (AP)

4/10 Palestinians line up at a community kitchen in Gaza City, as demand for food aid continues to rise. (AP)

5/10 A Palestinian girl gestures while waiting for food from a charity kitchen in Khan Younis, southern Gaza Strip, on August 4, amid a worsening hunger crisis. (REUTERS)

6/10 A displaced Palestinian girl reacts as she receives lentil soup at a food distribution point in Gaza City, northern Gaza Strip. Aid groups warn of a sharp rise in malnourished children in war-hit Gaza. (AFP)

7/10 A young displaced Palestinian girl takes a sip of lentil soup at a Gaza City food distribution point on July 25, 2025. Humanitarian agencies caution that hunger is spreading rapidly. (AFP)

8/10 A displaced Palestinian girl covers her head with a pot to shield herself from the scorching sun while waiting at a food distribution point in Gaza City on July 25, 2025. Aid organisations report surging malnutrition rates among children. (AFP)

9/10 Nine-year-old Mariam Dawwas, malnourished, sits with her mother on the floor in Rimal, Gaza City, on August 2. Her mother, Modallala, 33, living in a northern Gaza displacement camp, said Mariam had no illness, weighed 25 kg before the war, and now weighs 10 kg. The WHO warned on July 27 that malnutrition in Gaza had reached 'alarming levels'. (AFP)

When AI image checks mislocate news photos

New Straits Times

5 days ago


AN image by AFP photojournalist Omar al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel's blockade has fuelled fears of mass famine in the Palestinian territory. But when social media users asked Grok where it came from, X boss Elon Musk's artificial intelligence (AI) chatbot was certain that the photograph was taken in Yemen nearly seven years ago.

The AI bot's untrue response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo.

At a time when Internet users are increasingly turning to AI to verify images, the furore shows the risks of trusting tools like Grok when the technology is far from error-free.

Grok said the photo showed Amal Hussain, a 7-year-old Yemeni child, in October 2018. In fact the photo shows 9-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on Aug 2, 2025.

Before the war, sparked by Hamas's Oct 7, 2023 attack on Israel, Mariam weighed 25kg, said her mother. Today, she weighs only 9kg. The only nutrition she gets to help her condition is milk, said Modallala — and even that's "not always available".

Challenged on its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources." The chatbot eventually issued a response that recognised the error — but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.

The chatbot has previously issued content that praised Nazi leader Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

Grok's mistakes illustrate the limits of AI tools, whose functions are as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technological ethics. "We don't know exactly why they give this or that reply, nor how they prioritise their sources," said Diesbach, author of a book on AI tools, Hello ChatGPT.

Each AI has biases linked to the information it was trained on and the instructions of its creators, he said. In the researcher's view, Grok, made by Musk's xAI startup, shows "highly pronounced biases which are highly aligned with the ideology" of the South African billionaire, a former confidant of United States President Donald Trump and a standard-bearer for the radical right.

Asking a chatbot to pinpoint a photo's origin takes it out of its proper role, said Diesbach. "Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'."

AI does not necessarily seek accuracy — "that's not the goal", said the expert.

Another AFP photograph of a starving Gazan child by al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016. That error led to Internet users accusing the French newspaper Liberation, which had published the photo, of manipulation.

An AI's bias is linked to the data it is fed and what happens during fine-tuning — the so-called alignment phase — which then determines what the model would rate as a good or bad answer. "Just because you explain to it that the answer's wrong doesn't mean it will then give a different one," said Diesbach. "Its training data has not changed and neither has its alignment."

Grok is not alone in wrongly identifying images. When AFP asked Mistral AI's Le Chat — which is in part trained on AFP's articles under an agreement between the French startup and the news agency — the bot also misidentified the photo of Mariam Dawwas as being from Yemen.

For Diesbach, chatbots must never be used as tools to verify facts. "They are not made to tell the truth," but to "generate content, whether true or false", he said. "You have to look at it like a friendly pathological liar — it may not always lie, but it always could."
