
Grok shows 'flaws' in fact-checking Israel-Iran war: study
With tech platforms reducing their reliance on human fact-checkers, users are increasingly turning to AI-powered chatbots -- including xAI's Grok -- in search of reliable information, but the chatbots' responses are often themselves prone to misinformation.
"The investigation into Grok's performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot's ability to provide accurate, reliable, and consistent information during times of crisis," said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank.
"Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims."
The DFRLab analyzed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, and found that Grok was "struggling to authenticate AI-generated media."
Following Iran's retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found.
It oscillated -- sometimes within the same minute -- between denying the airport's destruction and confirming it had been damaged by strikes, the study said.
In some responses, Grok cited a missile launched by Yemeni rebels as the cause of the damage. In others, it wrongly identified the AI-generated airport as one in Beirut, Gaza, or Tehran.
When users shared another AI-generated video depicting buildings collapsing after an alleged Iranian strike on Tel Aviv, Grok responded that it appeared to be real, the study said.
The Israel-Iran conflict, which led to US air strikes against Tehran's nuclear program over the weekend, has churned out an avalanche of online misinformation including AI-generated videos and war visuals recycled from other conflicts.
AI chatbots also amplified falsehoods.
As the Israel-Iran war intensified, false claims spread across social media that China had dispatched military cargo planes to Tehran to offer its support.
When users asked the AI-operated X accounts of Perplexity and Grok whether the claim was true, both wrongly responded that it was, according to disinformation watchdog NewsGuard.
Researchers say Grok has previously made errors verifying information related to crises such as the recent India-Pakistan conflict and the protests in Los Angeles against immigration raids.
Last month, Grok came under renewed scrutiny for inserting the far-right conspiracy theory of "white genocide" in South Africa into responses to unrelated queries.
Musk's startup xAI blamed an "unauthorized modification" for the unsolicited responses.
Musk, a South African-born billionaire, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.
Musk himself blasted Grok after it cited Media Matters -- a liberal media watchdog he has targeted in multiple lawsuits -- as a source in some of its responses about misinformation.
Related Articles


Euronews - 8 hours ago
Which AI chatbot is the best at protecting your privacy?
Mistral AI's Le Chat is the least privacy-invasive generative artificial intelligence model, a new analysis of data privacy practices has found.

Incogni, a personal information removal service, used a set of 11 criteria to assess the various privacy risks of large language models (LLMs), including OpenAI's ChatGPT, Meta AI, Google's Gemini, Microsoft's Copilot, xAI's Grok, Anthropic's Claude, Inflection AI's Pi AI and China-based DeepSeek. Each platform was then scored from zero (most privacy-friendly) to one (least privacy-friendly) on that list of criteria.

The research aimed to identify how the models are trained, how transparent they are, and how data is collected and shared. Among the criteria, the study looked at the data sets used by the models, whether user-generated prompts could be used for training, and what data, if any, could be shared with third parties.

What sets Mistral AI apart?

The analysis showed that French company Mistral AI's Le Chat model is the least privacy-invasive platform because it collects 'limited' personal data and does well on AI-specific privacy concerns. Le Chat is also one of the few AI assistant chatbots in the study, along with Pi AI, that would only provide user-generated prompts to its service providers.

OpenAI's ChatGPT comes second in the overall ranking because the company has a 'clear' privacy policy that explains to users exactly where their data is going. However, the researchers noted some concerns about how the models are trained and how user data 'interacts with the platform's offerings'.

xAI, the company run by billionaire Elon Musk that operates Grok, came in third place because of transparency concerns and the amount of data collected. Meanwhile, Anthropic's Claude model performed similarly to xAI but raised more concerns about how models interact with user data, the study said.

At the bottom of the ranking is Meta AI, the most privacy-invasive of the models, followed by Gemini and Copilot. Many of the companies at the bottom of the ranking do not appear to let users opt out of having the prompts they generate used to further train their models, the analysis said.


France 24 - 9 hours ago
Tesla sales skid in Europe in May despite EV rebound
Sales of battery-electric vehicles jumped by 25 percent in Europe in May compared to the same month last year, according to the ACEA, the trade association of European car manufacturers. Tesla, meanwhile, sold 40.2 percent fewer cars in May.

The drop in demand for Tesla cars has been linked to its ageing fleet, competition from European and Chinese rivals, and consumer distaste for Musk's work in the Trump administration. Musk left his role as the US government's cost-cutter at the end of May and had a public falling-out with Trump earlier this month over the US president's spending bill.

During the first five months of 2025, Tesla sales fell 45.2 percent from the same period last year. The US company's share of Europe's total automobile market has fallen to 1.1 percent from two percent last year.

Tesla's slump comes as EV sales in Europe rebounded by 26.1 percent in the first five months of the year. Battery-electric cars accounted for 15.4 percent of all cars sold in May, up from 12.1 percent in the same month last year.

The EV market share is "still far from where it needs and was expected to be", said ACEA chief Sigrid de Vries. The EU aims to end sales of new internal combustion engine cars in 2035, but high prices and a perceived lack of charging infrastructure have given consumers pause.

"Consumer reluctance is by no means a myth, and we need to incentivise a supportive ecosystem -- from charging infrastructure to fiscal incentives -- to ensure the uptake of battery-electric models can meaningfully accelerate," added de Vries.

Overall, car sales rose by 1.6 percent in Europe last month, but were down by 0.6 percent in the first five months of the year.

