Hey chatbot, is this true? AI 'factchecks' sow misinformation

WASHINGTON: As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification – only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.
With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots – including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini – in search of reliable information.
'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media.
But the responses are often themselves riddled with misinformation.
Grok – now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries – wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India.
Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes.
'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP.
'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned.
'Fabricated'
NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.
In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.'
When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken.
Grok recently labelled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim.
In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.
Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.
The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking programme in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularised by X.
Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods.
'Biased answers'
Human fact-checking has long been a flashpoint in a hyperpolarised political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content – something professional fact-checkers vehemently reject.
AFP currently works in 26 languages with Facebook's fact-checking programme, including in Asia, Latin America, and the European Union.
The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.
Musk's xAI recently blamed an 'unauthorised modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa.
When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit.
Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people.
'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP.
'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorised answers.'


Related Articles

Calling for ethical and responsible use of AI

New Straits Times · 3 hours ago

LETTERS: In an era where artificial intelligence (AI) is rapidly shaping every facet of human life, it is critical that we ensure this powerful technology is developed and deployed with a human-centric approach. AI holds the potential to solve some of humanity's most pressing challenges, from healthcare innovations to environmental sustainability, but it must always serve the greater good.

To humanise AI is to embed ethical considerations, transparency, and empathy into the heart of its design. AI is not just a tool; it reflects the values of those who create it. Therefore, AI development should prioritise fairness, accountability, and inclusivity. This means avoiding bias in decision-making systems, ensuring that AI enhances human potential rather than replacing it, and making its benefits accessible to all, not just a select few.

Governments, industries, and communities must work together to create a governance framework that fosters innovation while protecting privacy and rights. We must also emphasise the importance of educating our workforce and future generations to work alongside AI, harnessing its capabilities while maintaining our uniquely human traits of creativity, compassion, and critical thinking.

As AI continues to transform the way we live, work, and interact, it is becoming increasingly urgent to ensure that its development and use are grounded in responsibility, accountability, and integrity. The Alliance for a Safe Community calls for clear, forward-looking regulations and a comprehensive ethical framework to govern AI usage and safeguard the public interest.

AI technologies are rapidly being adopted across sectors — from healthcare and education to finance, law enforcement, and public services.
While these advancements offer significant benefits, they also pose risks, including:

• Invasion of privacy and misuse of personal data;
• Algorithmic bias leading to discrimination or injustice;
• Job displacement and economic inequality;
• Deepfakes and misinformation.

Without proper regulation, AI could exacerbate existing societal challenges and even introduce new threats. There must be checks and balances to ensure that AI serves humanity and does not compromise safety, security, or fundamental rights.

We propose the following elements as part of a robust regulatory framework:

1. AI Accountability Laws – Define legal responsibility for harm caused by AI systems, especially in high-risk applications.
2. Transparency and Explainability – Mandate that AI decisions affecting individuals (e.g., in hiring, credit scoring, or medical diagnoses) must be explainable and transparent.
3. Data Protection and Privacy Standards – Strengthen data governance frameworks to prevent unauthorised access, misuse, or exploitation of personal data by AI systems.
4. Risk Assessment and Certification – Require pre-deployment risk assessments and certification processes for high-impact AI tools.
5. Public Oversight Bodies – Establish independent agencies to oversee compliance, conduct audits, and respond to grievances involving AI.

Technology alone cannot determine what is right or just. We must embed ethical principles into every stage of AI development and deployment. A Code of Ethics should include:

• Human-Centric Design – AI must prioritise human dignity, autonomy, and well-being.
• Non-Discrimination and Fairness – AI systems must not reinforce or amplify social, racial, gender, or economic bias.
• Integrity and Honesty – Developers and users must avoid deceptive practices and be truthful about AI capabilities and limitations.
• Environmental Responsibility – Developers should consider the energy and environmental impact of AI technologies.
• Collaboration and Inclusivity – The development of AI standards must include voices from all segments of society, especially marginalised communities.

AI is one of the most powerful tools of our time. Like any powerful tool, it must be handled with care, guided by laws, and shaped by ethical values. We urge policymakers, tech leaders, civil society, and global institutions to come together to build a framework that ensures AI is safe, inclusive, and used in the best interest of humanity.

The future of AI should not be one where technology dictates the terms of our humanity. Instead, we must chart a course where AI amplifies our best qualities, helping us to live more fulfilling lives, build fairer societies, and safeguard the well-being of future generations. Only by humanising AI can we ensure that its promise is realised in a way that serves all of mankind.

Trump slams Musk over spending bill row, calls him 'man who lost his mind'

Malay Mail · 3 hours ago

WASHINGTON, June 7 — US President Donald Trump said yesterday that Elon Musk had 'lost his mind' but insisted he wanted to move on from the fiery split with his billionaire former ally.

The blistering public break-up between the world's richest person and the world's most powerful man is fraught with political and economic risks all round. Trump had scrapped the idea of a call with Musk and was even thinking of ditching the red Tesla he bought at the height of their bromance, White House officials told AFP.

'Honestly I've been so busy working on China, working on Russia, working on Iran... I'm not thinking about Elon Musk, I just wish him well,' Trump told reporters aboard Air Force One en route to his New Jersey golf club late Friday.

Earlier, Trump told US broadcasters that he now wanted to focus instead on passing his 'big, beautiful' mega-bill before Congress — Musk's harsh criticism of which had sparked their break-up. But the 78-year-old Republican could not stop himself from taking aim at his South African-born friend-turned-enemy.

'You mean the man who has lost his mind?' Trump said in a call with ABC when asked about Musk, adding that he was 'not particularly' interested in talking to the tycoon. Trump later told Fox News that Musk had 'lost it.'

Just a week ago Trump gave Musk a glowing send-off as he left his cost-cutting role at the so-called Department of Government Efficiency (DOGE) after four months working there. While there had been reports of tensions, the sheer speed at which their relationship imploded stunned Washington.

After Musk called Trump's spending bill an 'abomination' on Tuesday, Trump hit back in an Oval Office diatribe on Thursday in which he said he was 'very disappointed' by the entrepreneur. Trump's spending bill faces a difficult path through Congress as it will raise the US deficit, while critics say it will cut health care for millions of the poorest Americans.

The row then went nuclear, with Musk slinging insults at Trump and accusing him without evidence of being in government files on disgraced financier and sex offender Jeffrey Epstein. Trump hit back with the power of the US government behind him, saying he could cancel the SpaceX boss's multi-billion-dollar rocket and satellite contracts.

Trump struck a milder tone late Friday when asked how seriously he is considering cutting Musk's contracts. 'It's a lot of money, it's a lot of subsidy, so we'll take a look — only if it's fair. Only if it's to be fair for him and the country,' he said.

Musk apparently also tried to de-escalate social media hostilities. The right-wing tech baron rowed back on a threat to scrap his company's Dragon spacecraft — vital for ferrying NASA astronauts to and from the International Space Station. And on Friday the usually garrulous poster kept a low social media profile on his X social network.

But the White House denied reports that they would talk. 'The president does not intend to speak to Musk today,' a senior White House official told AFP. A second official said Musk had requested a call.

Tesla giveaway?

Tesla stocks tanked more than 14 percent on Thursday amid the row, losing some US$100 billion of the company's market value, but recovering partly Friday.

Trump is now considering either selling or giving away the cherry red Tesla S that he announced he had bought from Musk's firm in March. The electric vehicle was still parked on the White House grounds yesterday. 'He's thinking about it, yes,' a senior White House official told AFP when asked if Trump would sell or give it away.

Trump and Musk had posed inside the car at a bizarre event in March, when the president turned the White House into a pop-up Tesla showroom after viral protests against Musk's DOGE role.

But while Trump appeared to hold many of the cards, Musk also has some to play. His wealth allowed him to be the biggest donor to Trump's 2024 campaign, to the tune of nearly US$300 million. Any further support for the 2026 midterm election now appears in doubt — while Musk could also use his money to undermine Trump's support on the right. — AFP

We are sorry, prime minister

Free Malaysia Today · 10 hours ago

PETALING JAYA: The misuse of video outtakes on social media has become a growing concern recently, with netizens editing clips to malign public personalities, distort narratives and mislead audiences. With content zipping around social media almost instantaneously, an innocent moment can quickly be turned into a controversy.

Unfortunately, FMT found itself a victim of this recently, with our popular Malay news presenter Nur Hailee Izzati unwittingly putting herself in the spotlight. Hailee had on May 30 posted a seemingly harmless outtake of herself laughing as she struggled to begin reading a news item that made mention of the prime minister.

The clip was, however, misused by a netizen who edited it to include a disparaging caricature of the prime minister, making it appear as though FMT and Hailee were mocking him. As a result, Hailee was forced to issue an apology, which she readily did.

Misused video outtakes can cause grave reputational damage which may be hard to undo. The consequences can also be severe, with both FMT and Hailee receiving instant backlash from all corners, bringing harm to the brand and jeopardising careers.

Ultimately, media outlets like FMT and journalists like Hailee must exercise care in what they do both in their professional and personal capacities. Meanwhile, netizens, many of whom are now themselves content creators, must do their part by consuming and using content ethically to maintain proper digital discourse. That is a tough ask, and can only be achieved if everyone plays their part.

FMT and Hailee regret the manipulation of one of our unused video clips in a way that has brought ridicule to Prime Minister Anwar Ibrahim. We sincerely apologise to him and to all our readers and viewers.
