Research uncovers troubling flaw in AI chatbots that raises serious concerns: 'Playing with fire'

Yahoo

24-03-2025



Recent research by the BBC showed that AI chatbots from four major companies are unable to accurately summarize or answer questions when prompted with information from specific news sources, against a backdrop of increasing legal action against AI companies. The four chatbots — OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity AI — were given content from the BBC's website and tasked with answering questions about the articles. Across 100 BBC articles, 51% of the AI-generated summaries had significant issues, and 19% introduced new incorrect information. Many of the inaccuracies involved wrong dates and names, and some responses even misquoted the articles.

Deborah Turness, the CEO of BBC News and Current Affairs, expressed concern over the inaccuracies the test revealed. "We live in troubled times, and how long will it be before an AI-distorted headline causes significant real-world harm?" she asked. "The companies developing [generative] AI tools are playing with fire."

With the increased use of AI, especially atop Google search results, it is important for companies to improve these tools so that misinformation is not spread to the general public.

There have also been adverse environmental effects from companies leveraging AI. Data centers consume massive amounts of water and other resources, and the poor user experience provided by AI services makes that usage wasteful. While Big Tech has claimed to be pursuing clean energy initiatives to meet its power needs with less pollution, much of both current and planned capacity relies on natural gas power plants, which release heat-trapping gases into the atmosphere.

Outside of its test, the BBC has blocked its articles from being used in AI results in Google searches.
The BBC's programme director for generative AI, Pete Archer, said companies "should have control over whether and how their content is used, and AI companies should show how assistants process news along with the scale and scope of errors and inaccuracies they produce."

Other companies have followed suit, such as Chegg, the New York Times, Forbes, and News Corp. Chegg, an educational tech company, filed a lawsuit against Google in federal court, primarily over copyright issues and revenue lost when AI results negate, or seem to negate, a searcher's need to open a site to understand the information in its proper context. The others, meanwhile, have sued or threatened to sue Perplexity to stop it from using their content.

Everyday consumers should pay attention to how large companies are leveraging AI while touting clean energy initiatives, watching out for corporate greenwashing and prioritizing companies that put environmental concerns at the forefront. While a smarter future is possible by utilizing AI, users should still consider how a greener future might live alongside it.

On a personal level, users can aim to lower their usage of AI tools unless the benefits seem to outweigh the costs. It's also worth trying the Google Chrome extension Ecosia, which replaces Google as your default search tool with a modified version of Bing that features zero AI results and even plants trees with the ad revenue it generates.

AI has its place in the world, as Dr. Chris Mattman told The Cool Down in a recent interview, but that doesn't mean people don't need to be mindful of their usage. While one individual might not be enough to reverse the danger, making a collaborative effort is the first step in working toward a cleaner future for all.

Deborah Turness - AI Distortion is new threat to trusted information

BBC News

11-02-2025



Disinformation. By now we are all aware of its polarising effects and real-world consequences. But how many of us are aware of the new and growing threat to trusted information that is emerging from generative AI's explosion onto the scene? I'm talking about 'distortion'. Distortion is what happens when an AI assistant 'scrapes' information to respond to a question and serves up an answer that is factually incorrect, misleading, and potentially dangerous.

Don't get me wrong - AI is the future and brings endless opportunities. Here at BBC News we are already forging ahead with AI tools that will help us deliver more trusted journalism to more consumers in more formats, and on platforms where they need it. And we are in discussions with tech companies around new AI applications that could further enhance and improve our output.

But the price of AI's extraordinary benefits must not be a world where people searching for answers are served distorted, defective content that presents itself as fact. In what can feel like a chaotic world, it surely cannot be right that consumers seeking clarity are met with yet more confusion. It's not hard to see how quickly AI's distortion could undermine people's already fragile faith in facts and verified information. We live in troubled times, and how long will it be before an AI-distorted headline causes significant real-world harm? The companies developing generative AI tools are playing with fire.

That's why we at the BBC want to open up a new conversation with AI tech providers and other leading news brands, so we can work together in partnership to find solutions. But before this conversation can begin, we needed to find out the scale of the problem with the distortion of news. In the absence of any existing research we could find, we made a start - and we hope that regulators who oversee the online space will consider further work in this area.
Today we are making that research public, and it shows how distortion is affecting the current generation of AI assistants. Our researchers tested market-leading consumer AI tools - ChatGPT, Perplexity, Microsoft Copilot and Google Gemini - by giving them access to the BBC News website and asking them to answer one hundred basic questions about the news, prompting them to use BBC News articles as sources.

The results? The team found 'significant issues' with just over half of the answers generated by the assistants. The AI assistants introduced clear factual errors into around a fifth of answers they said had come from BBC material. And where AI assistants included 'quotations' from BBC articles, more than one in ten had either been altered or didn't exist in the article at all.

Part of the problem appears to be that AI assistants do not discern between fact and opinion in news coverage, do not distinguish between current and archive material, and tend to inject opinions into their answers. The results they deliver can be a confused cocktail of all of these - a world away from the verified facts and clarity that we know consumers crave and deserve.

The full research is published on the BBC website, but I'll share a couple of examples to illustrate the point. A Perplexity response on the escalation of conflict in the Middle East, giving the BBC as its source, said Iran initially showed 'restraint' and described Israel's actions as 'aggressive' - yet those adjectives hadn't been used in the BBC's impartial reporting. In December 2024, ChatGPT told us that Rishi Sunak was still in office; Copilot made a similar error, saying Nicola Sturgeon was. They were not. Gemini misrepresented NHS advice about vaping.

Of course, AI software will often include disclaimers about the accuracy of its results, but there is clearly a problem here. When it comes to news, we all deserve accurate information we can trust - not a confusing mash-up presented as fact.
At least one of the big tech companies is taking this problem seriously. Last month Apple pressed 'pause' on its AI feature that summarises news notifications, after BBC News alerted the company to serious issues. The Apple Intelligence feature had hallucinated and distorted BBC News alerts to create wildly inaccurate headlines, displayed alongside the BBC News logo. Where the BBC News alert said LA officials had 'arrested looters' during the city's wildfires, Apple's AI-generated summary said it was the LA officials themselves who had been arrested for looting.

There were many more examples, but Apple's bold and responsible decision to pull back its AI summaries feature for news alerts shows it recognises the high stakes of distorted news and information. And if generative AI technology is not yet ready to scrape and serve news without distorting and contorting the facts, isn't it in everyone's interest to do as Apple has done? We'd like other tech companies to hear our concerns, just as Apple did.

It's time for us to work together - the news industry and tech companies - and of course government has a big role to play here too. There is a wider conversation to be had around regulation to ensure that, in this new version of our online world, consumers can still find clarity through accurate news and information from sources they know they can trust.

Earning trust has never been more critical. As the CEO of BBC News, that is my number one priority. And this new phenomenon of distortion - an unwelcome sibling to disinformation - threatens to undermine people's ability to trust any information whatsoever.

So I'll end with a question: how can we work urgently together to ensure that this nascent technology is designed to help people find trusted information, rather than add to the chaos and confusion? We at the BBC are ready to host the conversation.
