Grok Shows 'Flaws' In Fact-checking Israel-Iran War: Study

Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said Tuesday, raising fresh doubts about its reliability as a debunking tool.
With tech platforms reducing their reliance on human fact-checkers, users are increasingly turning to AI-powered chatbots -- including xAI's Grok -- in search of reliable information, but the chatbots' responses are themselves often prone to misinformation.
"The investigation into Grok's performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot's ability to provide accurate, reliable, and consistent information during times of crisis," said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank.
"Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims."
The DFRLab analyzed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, and found that Grok was "struggling to authenticate AI-generated media."
Following Iran's retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found.
It oscillated -- sometimes within the same minute -- between denying the airport's destruction and confirming it had been damaged by strikes, the study said.
In some responses, Grok cited a missile launched by Yemeni rebels as the source of the damage. In others, it wrongly identified the AI-generated airport as one in Beirut, Gaza, or Tehran.
When users shared another AI-generated video depicting buildings collapsing after an alleged Iranian strike on Tel Aviv, Grok responded that it appeared to be real, the study said.
The Israel-Iran conflict, which led to US air strikes against Tehran's nuclear program over the weekend, has churned out an avalanche of online misinformation including AI-generated videos and war visuals recycled from other conflicts.
AI chatbots also amplified falsehoods.
As the Israel-Iran war intensified, false claims spread across social media that China had dispatched military cargo planes to Tehran to offer its support.
When users asked the AI-powered X accounts of Perplexity and Grok about the claim's validity, both wrongly responded that it was true, according to disinformation watchdog NewsGuard.
Researchers say Grok has previously made errors verifying information related to crises such as the recent India-Pakistan conflict and the protests in Los Angeles against immigration raids.
Last month, Grok came under renewed scrutiny for inserting "white genocide" in South Africa, a far-right conspiracy theory, into responses to unrelated queries.
Musk's startup xAI blamed an "unauthorized modification" for the unsolicited response.
Musk, a South African-born billionaire, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.
Musk himself blasted Grok after it cited Media Matters -- a liberal media watchdog he has targeted in multiple lawsuits -- as a source in some of its responses about misinformation.
"Shame on you, Grok," Musk wrote on X. "Your sourcing is terrible."


Related Articles


Tesla Ordered to Stop 'Deceptive Practices' on Cars' Self-Driving Capabilities in France or Face Thousands in Fines
Int'l Business Times · 12 hours ago

France has ordered Tesla to stop "deceptive practices" related to the marketing of its vehicles' self-driving features, warning that Elon Musk's company could face daily fines exceeding $58,000 until it complies.

Following investigations conducted in 2023 and 2024, France's Directorate General for Competition, Consumer Affairs and Fraud Control (DGCCRF) concluded that Tesla engaged in "deceptive commercial practices" by falsely advertising its vehicles as "fully autonomous." In reality, the vehicles require a human to be in the driver's seat, paying full attention to the road, the Financial Times reported.

The inquiry also found that Tesla signed sales contracts without specifying dates, times or delivery locations, and routinely failed to issue refunds or receipts in a timely manner.

The DGCCRF gave Tesla four months to comply with the order. It said the "particular seriousness" of the misleading practices over autonomous vehicles meant Tesla would face the hefty fine for each day it continued to mislead customers.

France's investigation marks the latest setback for Musk, whose company's sales in the country have sharply declined following his alignment with the Trump administration. In May, Tesla's sales in the European nation plunged 67% compared to the same month in 2024, with new vehicle registrations falling to their lowest level since July 2022.

Originally published on Latin Times

AI Might be Writing the Code, But Who's Reading It? Jonathan Corrales Calls for Education Reform
Int'l Business Times · 2 days ago

There's no doubt that artificial intelligence can generate thousands of lines of code in seconds. But that raises the question of what it actually means to be a software engineer. For Jonathan Corrales, the founder of Ready Aim Interview, a communications coaching firm for tech professionals, the answer is clear: engineers must become expert code readers, not just code writers. And until computer science education reflects that, students will continue to graduate unprepared for the reality of today's job market.

"Computer science programs are still largely teaching people how to write code," says Corrales. "But in the age of AI, reading code is what's going to matter most."

Corrales, a 20-year tech industry veteran and former hiring manager who has interviewed many candidates, believes education is falling dangerously behind industry needs. With AI changing the nature of software development, the skills required to succeed are evolving, but classrooms, he warns, are still stuck in the 2000s. "We're flooding the workforce with graduates who know how to write for the test, but not how to think like an engineer, like a creator," he shares.

The fallout is already visible. According to recent U.S. labor data, computer engineering and computer science majors rank among the top 10 degrees with the highest unemployment rates, with computer science sitting at 6.1%. Corrales warns, "This should be a cause for concern. Computer science promises high salaries and stability, but many graduates can't land their first role."

The biggest reason is that entry-level jobs are slowly disappearing. Employers expect candidates to arrive with experience navigating today's complex systems, managing large codebases, and collaborating in real-world environments. But most degree programs still revolve around isolated, individual coding assignments, far removed from industry conditions. "There's a growing disconnect between what students learn and what they're expected to do on day one of a job," says Corrales. "And that gap is only widening with the rise of AI."

The quote that inspired Corrales' thinking comes from a famous American computer scientist, to the effect that reading is more important than writing. "That quote stuck with me," he reflects. "Now, more than ever, it applies to code. AI is writing at scale. But it's still up to humans to read, interpret, and fix that code when it inevitably breaks."

Corrales cites real-world examples: AI-generated code that works until it doesn't, and massive software systems like Linux, which spans 30 million lines of code. "Imagine reading through 20,000 lines of text every day just to find a bug or a bottleneck," Corrales says. "That's the new reality."

To prepare engineers for that reality, Corrales believes computer science programs must overhaul their approach. His recommendation: do teach syntax, but move quickly to reading real-world code. Use open-source libraries. Challenge students to fix bugs or enhance features. Corrales further suggests replacing one-off assignments with group projects. "Real jobs require collaboration, version control, and communication; skills students rarely practice," he states. Incorporating AI as a tool, not just a topic, could also help: instructors can use it to create sample applications with intentional errors and ask students to debug and improve them. The focus should shift from creating to maintaining because, according to Corrales, the job is about understanding what exists and making it better.

He even draws an analogy from literature. "If you wanted to become a writer, you'd read books and compare how authors handle conflict, pacing, or character. Engineers should be doing the same thing with repositories."

While Ready Aim Interview isn't an educational institution, Corrales uses his platform to help job seekers navigate this shifting terrain. His clients, many recently laid off and some just entering the workforce, often arrive feeling lost. "They come to me deflated. They've spent years and thousands of dollars on a degree, and they still don't know how to get hired," he says.

What Corrales offers is guidance: helping them understand system design, refine interview strategies, and, most critically, reframe how they think about their value. "I tell them: you have to be sharper than ever before. Not just in your skills, but in how you present those skills. Because the jobs are fewer, the bar is higher, and the expectations are shifting."

And that's exactly why he's calling for a change in education. While individuals can self-study, pursue mentorship, or practice endlessly, there's no substitute for a system that prepares students for the field they're entering. "When I was a student, it took me a year to find my first job," Corrales recalls. "And at that time, I wasn't even competing with AI, or global talent, or a pandemic-sized economic shock. Today's students are. And if their education isn't built for that world, we're setting them up to fail."

The future of software development may be faster, but it's also messier, more complex, and more collaborative than ever before. Yes, AI can generate software. But software still needs to be debugged, secured, and maintained. That's human work, and it needs to be trained from the outset.

In the meantime, Corrales advises students to take ownership of their learning: "Your education might not teach you everything. But that doesn't mean you can't learn it. You just have to know where to look and be willing to sharpen your tools every step of the way."
