
Latest news with #factcheck

BBC Verify Live: Debunking AI-generated images of Russian airbases

BBC News

3 days ago



Update, 09:13 BST - Friday on BBC Verify

Johanna Chisholm, BBC Verify Live editor

Hello - welcome back to today's live feed, where we're bringing you the latest updates from BBC Verify, including fact-checks, online open-source gathering, disinformation debunking and data journalism. I've just been to our morning meeting, and among the main prospects for BBC Verify today, we'll be monitoring Kemi Badenoch's speech later this morning, where the Conservative leader is expected to announce plans to set up a commission that would examine whether the UK should withdraw from the European Convention on Human Rights (ECHR).

BBC Verify Live: Fact-checking Trump's reasons for new travel bans

BBC News

4 days ago



Update, 09:07 BST - Welcome to BBC Verify Live

Rob Corp, BBC Verify Live editor

Hello - we're here with the latest updates from the BBC's team working on fact-checks, online open-source gathering, disinformation debunking and data journalism. I've been to our morning meeting, and among the main prospects for BBC Verify today, we'll be looking out for more details on the impact of Ukraine's audacious drone strikes against Russian airbases - you can read our analysis of what satellite imagery shows us here.

Fact check: How trustworthy are AI fact checks?

Times of Oman

17-05-2025



London: "Hey, @Grok, is this true?" Ever since Elon Musk's xAI launched its generative artificial intelligence chatbot Grok in November 2023, and especially since it was rolled out to all non-premium users in December 2024, thousands of X (formerly Twitter) users have been asking this question to carry out rapid fact checks on information they see on the platform.

A recent survey carried out by the British online technology publication TechRadar found that 27% of Americans had used artificial intelligence tools such as OpenAI's ChatGPT, Meta's Meta AI, Google's Gemini, Microsoft's Copilot or apps like Perplexity instead of traditional search engines like Google or Yahoo.

But how accurate and reliable are the chatbots' responses? Many people have asked themselves this question in the face of Grok's recent statements about "white genocide" in South Africa. Apart from Grok's problematic stance on the topic, X users were also irritated that the bot began talking about the issue even when it was asked about completely different topics.

The discussion around the alleged "white genocide" arose after the Trump administration brought white South Africans to the United States as refugees. Trump said they were facing a "genocide" in their homeland — an allegation that lacks any proof and that many see as related to the racist "Great Replacement" conspiracy myth.

xAI blamed an "unauthorised modification" for Grok's obsession with the "white genocide" topic, and said it had "conducted a thorough investigation." But do flaws like this happen regularly? How sure can users be of getting reliable information when they want to fact-check something with AI?
Study shows factual errors and altered quotes

Two studies conducted this year by the British public broadcaster BBC and the Tow Center for Digital Journalism in the United States found significant shortcomings when it comes to the ability of generative AI chatbots to accurately convey news reporting.

In February, the BBC study found that "answers produced by the AI assistants contained significant inaccuracies and distorted content" produced by the organization. When it asked ChatGPT, Copilot, Gemini and Perplexity to respond to questions about current news by using BBC articles as sources, it found that 51% of the chatbots' answers had "significant issues of some form." Nineteen percent of answers were found to have added their own factual errors, while 13% of quotes were either altered or not present at all in the cited articles.

"AI assistants cannot currently be relied upon to provide accurate news and they risk misleading the audience," said Pete Archer, director of the BBC's Generative AI Program.

AI offers incorrect answers with 'alarming confidence'

Similarly, research by the Tow Center for Digital Journalism, published in the Columbia Journalism Review (CJR) in March, found that eight generative AI search tools were unable to correctly identify the provenance of article excerpts in 60% of cases. Perplexity performed best with a failure rate of "only" 37%, while Grok answered 94% of queries incorrectly.

The CJR said it was particularly concerned by the "alarming confidence" with which AI tools presented incorrect answers. "ChatGPT, for instance, incorrectly identified 134 articles, but signaled a lack of confidence just fifteen times out of its two hundred [total] responses, and never declined to provide an answer," said the report.
Overall, the study found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead" and that AI search tools "fabricated links and cited syndicated and copied versions of articles."

AI chatbots are only as good as their 'diet'

And where does AI itself get its information? It is fed by different sources, such as extensive databases and web searches. Depending on how AI chatbots are trained and programmed, the quality and accuracy of their answers can vary.

"One issue that recently emerged is the pollution of LLMs [Large Language Models — Editor's note] by Russian disinformation and propaganda. So clearly there is an issue with the 'diet' of LLMs," Tommaso Canetta told DW. He is the deputy director of the Italian fact-checking project Pagella Politica and fact-checking coordinator at the European Digital Media Observatory.

"If the sources are not trustworthy and qualitative, the answers will most likely be of the same kind," Canetta explained. He said he regularly comes across responses that are "incomplete, not precise, misleading or even false." In the case of xAI and Grok, whose owner, Elon Musk, is a fierce supporter of US President Donald Trump, there is a clear danger that the "diet" could be politically controlled, he added.

When AI gets it all wrong

In April 2024, Meta AI reportedly posted in a New York parenting group on Facebook that it had a disabled yet academically gifted child and offered advice on special schooling. Eventually, the chatbot apologised and admitted that it didn't have "personal experiences or children," as Meta told 404media, which reported on the incident: "This is new technology and it may not always return the response we intend, which is the same for all generative AI systems. Since we launched, we've constantly released updates and improvements to our models and we're continuing to work on making them better," a spokesperson said in a statement.

In the same month, Grok misinterpreted a viral joke about a poorly performing basketball player and told users in its trending section that he was under investigation by police after being accused of vandalizing homes with bricks in Sacramento, California. Grok had misunderstood the common basketball expression whereby a player who has failed to get any of their shots on target is said to have been "throwing bricks."

Other mistakes have been less amusing. In August 2024, Grok spread misinformation regarding the deadline for US presidential nominees to be added to ballots in nine US states following the withdrawal of then-President Joe Biden from the race. In a public letter to Musk, Minnesota Secretary of State Steve Simon wrote that, within hours of Biden's announcement, Grok had generated false headlines claiming that Vice President Kamala Harris would be ineligible to appear on the ballot in multiple states — untrue information.

Grok assigns the same AI image to various real events

It's not just news that AI chatbots appear to have difficulties with; they also exhibit severe limitations when it comes to identifying AI-generated images. In a quick experiment, DW asked Grok to identify the date, location and origin of an AI-generated image of a fire at a destroyed aircraft hangar, taken from a TikTok video. In its response and explanations, Grok claimed that the image showed several different incidents at several different locations, ranging from a small airfield in Salisbury, England, to Denver International Airport in Colorado, to Tan Son Nhat International Airport in Ho Chi Minh City, Vietnam. There have indeed been accidents and fires at these locations in recent years, but the image in question showed none of them.
DW strongly believes the image was generated by artificial intelligence, which Grok seemed unable to recognise, despite clear errors and inconsistencies in the image — including inverted tail fins on the airplanes and illogical jets of water from fire hoses.

Even more concerningly, Grok recognised part of the "TikTok" watermark visible in the corner of the image and suggested that this "supported its authenticity." Conversely, under its "More details" tab, Grok stated that TikTok was "a platform often used for rapid dissemination of viral content, which can lead to misinformation if not properly verified."

Similarly, just this week, Grok informed X users (in Portuguese) that a viral video purporting to show a huge anaconda in the Amazon, seemingly measuring several hundred meters (over 500 feet) in length, was real — despite it clearly having been generated by artificial intelligence, and despite Grok even recognising a ChatGPT watermark.

Image caption: A screenshot from a video allegedly showing a huge anaconda swimming in the Amazon River, marked as "fake" and "AI-generated".

AI chatbots 'should not be seen as fact-checking tools'

AI chatbots may appear to be an omniscient entity, but they are not. They make mistakes, misunderstand things and can even be manipulated. Felix Simon, postdoctoral research fellow in AI and digital news and research associate at the Oxford Internet Institute (OII), concludes: "AI systems such as Grok, Meta AI or ChatGPT should not be seen as fact-checking tools. While they can be used to that end with some success, it is unclear how well and consistently they perform at this task, especially for edge cases."

For Canetta at Pagella Politica, AI chatbots can be useful for very simple fact checks. But he also advises people not to trust them entirely. Both experts stressed that users should always double-check responses with other sources.

Newsom debuts rapid-response website as critics accuse him of prioritizing presidential ambitions

Fox News

09-05-2025



Gov. Gavin Newsom, D-Calif., a potential 2028 presidential candidate, launched a new fact-check website on Wednesday targeting "right-wing misinformation," as a new poll found more than half of California registered voters believe he is more focused on becoming president than on delivering for Californians. Newsom's campaign apparatus described the new fact-check website as a "rapid-response website to set the record straight about the Golden State" and President Donald Trump.

A new survey conducted by UC Berkeley's Institute of Governmental Studies and co-sponsored by the Los Angeles Times found that 54% of California registered voters believe Newsom is more focused on his personal presidential ambitions than on solving the ongoing problems at home in the Golden State. "By more than a two-to-one margin (54% to 26%), most voters believe that as Newsom serves out his final two years as governor, he is devoting more of his attention to things that might benefit himself as a possible candidate for president than to governing the state and helping to solve its problems," according to the results of the poll, completed April 21-28 among 6,201 registered voters in California.

Newsom's campaign, in a press release announcing the new fact-checking endeavor, touted California as the fourth-largest economy in the world, and said the blue state's population is growing, crime rates are at historic lows and California is leading the nation in clean energy and tech innovation. "This site is for everyone sick of the BS about California. We're done letting the MAGA trolls define the Golden State. We're going on the offense and fighting back — with facts," Newsom said.

Newsom's latest endeavor targets misinformation surrounding the devastating Los Angeles fires this year and other state issues, including crime, climate, the economy, immigration, energy and housing. California is often ridiculed by Republicans as a representation of the demise of Democratic-led states.
Such was the case last month, when a California lawmaker proposed a bill to allow state college and university students to sleep in their cars amid the blue state's housing crisis.

The California politician has long been rumored to harbor presidential ambitions. He was one of several names floated as a potential replacement for President Joe Biden as the Democratic nominee before Biden suspended his re-election campaign last summer and ultimately chose then-Vice President Kamala Harris as his successor. Newsom also campaigned for Biden and Harris in key battleground states, acting as a surrogate for both candidates when their names were at the top of the ticket.

Newsom launched a podcast this year embracing political dialogue across party lines, after Democrats lost the White House and the Senate and failed to regain the House of Representatives in 2024. He has invited Trump allies and conservative guests, including Charlie Kirk and Steve Bannon, onto his podcast in an attempt to show he is open to "criticism and debate without demeaning or dehumanizing one another." The strategy follows criticism after the 2024 election that Democrats didn't sufficiently prioritize new-media appearances and unscripted conversations.

But Newsom's willingness to soak up the national spotlight as Democrats look for someone to lead the party into the next generation may have backfired among his California constituents. Newsom is one of several Democratic governors trying to balance diplomacy with resisting Trump's agenda during the president's second term. He has spoken out against Trump's tariff policies and executive orders while reaching across the aisle to secure disaster relief following the Los Angeles fires earlier this year.
The poll found a majority of California voters believe the state would be negatively impacted by Trump's overhaul of the federal government: 64% believe Trump's tariffs would hurt California business and agriculture, and over 50% believe Trump's policies would have a negative impact on education.

Meanwhile, Californians are split on Newsom's job performance, with 46% approving and 46% disapproving of his performance as governor, and 45% reporting they are not confident in "his ability to be effective in looking out for California's interests when dealing with the Trump administration."

"The Governor is focused on one thing: his job — driving L.A.'s recovery, confronting the housing crisis, and taking Donald Trump to court over his disastrous tariffs that are raising costs for families and blowing a hole in California's budget," Newsom's office told Fox News in response to the new polling.
