
Latest news with #factchecking

BBC Verify Live: What we know about Gaza aid incident

BBC News

3 hours ago

  • General
  • BBC News

BBC Verify Live: What we know about Gaza aid incident

Update, 08:51 BST: What we're working on today
Rob Corp, BBC Verify Live editor

I've just come out of the BBC Verify editors' meeting and these are the lines the team is working on today:

Elsewhere, we're keeping an eye on Ukraine following the weekend's large-scale drone attack, which left 40 Russian bomber aircraft in flames.

Update, 08:29 BST: Good morning from BBC Verify Live
Rob Corp, BBC Verify Live editor

Welcome to BBC Verify Live - our new live page bringing you updates across the day on our work fact-checking and verifying the sources behind the news.

We're going to bring you updates on the stories we are investigating, as well as insights into the tools and processes we use to ensure what you see and read is accurate. One of our main tasks is open-source intelligence, where we verify social media videos and satellite images and use plane-tracking websites to inform our reporting and show the situation in places that are hard to reach.

BBC Verify is also dedicated to examining the facts and claims behind a story to determine whether it is true. And while this means we're always listening out to check that what people say is right and fair, we're also keen to hear from you. If you've seen or heard something online - via social media or messaging apps - and think it could be false or misleading, then do get in touch. You can send your suggestions to the team here.

How AI chatbot Grok sowed misinformation during India-Pakistan military conflict

Arab News

5 hours ago

  • Politics
  • Arab News

How AI chatbot Grok sowed misinformation during India-Pakistan military conflict

WASHINGTON, US: As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification — only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots — including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini — in search of reliable information. 'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok — now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries — wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan air base during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes.

'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. 'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned.

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.'

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularized by X. Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods.

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content — something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.

The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control. Musk's xAI recently blamed an 'unauthorized modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit. Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people.

'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP. 'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.'

Hey chatbot, is this true? AI's answer: not really, say fact-checkers

Malay Mail

8 hours ago

  • Politics
  • Malay Mail

Hey chatbot, is this true? AI's answer: not really, say fact-checkers

WASHINGTON, June 2 — As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification — only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots — including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini — in search of reliable information. 'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok — now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries — wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes.

'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. 'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned.

'Fabricated'

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. In a recent study of eight AI search tools, the Tow Centre for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.'

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labelled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularized by X. Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods.

'Biased answers'

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content — something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.

The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control. Musk's xAI recently blamed an 'unauthorized modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit. Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people.

'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP. 'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.' — AFP

Some newsrooms still struggle with the gap between capability and accountability where AI is concerned

CNN

5 days ago

  • Business
  • CNN

Some newsrooms still struggle with the gap between capability and accountability where AI is concerned

An inaccurate AI-produced reading list recently published by two newspapers demonstrates just how easy it still is for publishers to circulate AI slop.

The Chicago Sun-Times and the Philadelphia Inquirer last week published a summer reading insert produced by King Features, a Hearst Newspapers subsidiary that provides the pair with licensed content. While the insert included real authors, the recommended books were mostly fake. Ultimately, 404 Media found that a human writer had produced the list using ChatGPT and failed to fact-check it. 'I do use AI for background at times but always check out the material first,' the insert's writer told 404 Media. 'This time, I did not and I can't believe I missed it because it's so obvious. No excuses.'

OpenAI's launch of ChatGPT more than two years ago kicked off an AI gold rush, resulting in a deluge of AI-infused tools aiming to help people find information online without sifting through lists of links. But that convenience comes at a cost, with AI chatbots continuing to offer incorrect or speculative responses.

Newsrooms have adopted AI chatbots with some trepidation, aware that the technology opens up new opportunities as well as potential high-profile blunders — all amid fears that AI could lead to job losses and eat into news outlets' revenue sources. Not adopting the technology, however, means risking being left behind as others use AI to comb through enormous datasets, incubate ideas and help readers navigate complicated narratives.

Though many major newsrooms have adopted AI guidelines since ChatGPT's launch, the sheer size of some newsrooms' staff, coupled with multiple external partnerships, complicates identifying where embarrassing AI blunders can occur. The insert incident exemplifies the myriad ways AI errors can be introduced into news products.

Most supplements that the Sun-Times has run this year — from puzzles to how-to guides — have been from Hearst, Tracy Brown, the chief partnerships officer for Sun-Times parent Chicago Public Media, told CNN. However, whether it's an insert or a full-length story, Brown stressed that newsrooms have to use AI carefully. 'It's not that we're saying that you can't use any AI,' she said. 'You have to use it responsibly and you have to do it in a way that keeps your editorial standards and integrity intact.'

It's precisely because AI is prone to errors that newsrooms must maintain the 'fundamental standards and values that have long guided their work,' Peter Adams, a senior vice president of research and design at the News Literacy Project, told CNN. That includes being transparent about using AI in the first place.

Many high-profile publishers have been candid about how their newsrooms use AI to bolster reporting. The Associated Press — considered by many within the news industry to be the gold standard for journalism practices, given how it has used AI for translation, summaries and headlines — has avoided gaffes by always including a human backstop. Amanda Barrett, the AP's vice president of standards, told CNN that any information gathered using AI tools is considered unvetted source material, and reporters are responsible for verifying AI-produced information. The AP also checks that its third-party partners have similar AI policies.

'It's really about making sure that your standards are compatible with the partner you're working with and that everyone's clear on what the standard is,' Barrett said.

Zack Kass, an AI consultant and former OpenAI go-to-market lead, echoed Barrett, telling CNN that newsrooms need to treat AI 'like a junior researcher with unlimited energy and zero credibility.' This means that AI writing should be 'subject to the same scrutiny as a hot tip from an unvetted source.' 'The mistake is using it like it's a search engine instead of what it really is: an improviser with a genius-level memory and no instinct for truth,' he added.

High-profile AI mistakes in newsrooms, when they happen, tend to be very embarrassing. Bloomberg News' AI summaries, for example, were announced in January and already have included several errors. The LA Times' Insights AI in March sympathized with the KKK within 24 hours of its launch. And in January, Apple pulled a feature from its Apple Intelligence AI that incorrectly summarized push notifications from news outlets.

That's only recently. For years, newsrooms have struggled when AI has been allowed to proceed unchecked. Gannett in 2023 was forced to pause an AI experiment after several major errors in high school sports articles. And CNET in 2023 published several inaccurate stories. Still, as Felix Simon, a research fellow in AI and digital news at the University of Oxford's Reuters Institute for the Study of Journalism, points out, 'the really egregious cases have been few and far between.'

New research innovations have reduced hallucinations, or false answers from AI, pushing chatbots to spend more time thinking before responding, Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania, told CNN. But they're not infallible, which is how these incidents still occur. 'AI companies need to do a better job communicating to users about the potential for errors, since we have repeatedly seen examples of users misunderstanding how to use technology,' Callison-Burch said.

According to Brown, all editorial content at the Sun-Times is produced by humans. Looking forward, the newspaper will ensure that editorial partners, like King Features, uphold those same standards, just as the newspaper already ensures freelancers' codes of ethics mirror its own.

But the 'real takeaway,' as Kass put it, isn't just that humans are needed — it's 'why we're needed.' 'Not to clean up after AI, but to do the things AI fundamentally can't,' he said. '(To) make moral calls, challenge power, understand nuance and decide what actually matters.'
