

We broke the story. AI is breaking the internet.

Winnipeg Free Press

6 days ago


In the summer of 2019, Free Press investigative reporter Ryan Thorpe went undercover for a meeting with a local recruiter for a neo-Nazi group at a secluded park in St. Boniface. The man he met while infiltrating the white nationalist hate group turned out to be a Canadian Armed Forces reservist from Beausejour by the name of Patrik Mathews.

In short order, Thorpe would uncover a story that would ultimately lead to a Maryland courtroom, where Mathews would be sentenced to nine years in prison for what the FBI said was a plan to incite a race war in the United States.

But five years later, all the work Thorpe did as part of our Homegrown Hate investigation is no longer part of the record as it's now defined by the 'intelligence' of Google's Gemini artificial intelligence search platform. Instead, this new approach to search delivers a summary, minus any links, sources or reason to look any further than the answer Google has just provided, when you type 'Patrik Mathews' into that search window:

[Image: Patrik Mathews' biography, as summarized by a Google Gemini query.]

Note how that summary makes no mention of the Free Press as the source of the investigation or Thorpe as the journalist, including the fact that he had to contend with death threats when the FBI learned members of The Base were planning to kill him for exposing Mathews.

In this AI-powered future of search, there is no reward for being the originating source: no links for readers to click to read more, no revenue to pay for that journalism, and not even any recognition. Instead, what greets you is the 'answer' by way of an AI overview.

That might be good for tech titans like Google who want to keep users looking at their pages. But it's yet another gut punch to content creators like the Free Press who rely on search engines to send readers our way.

'Large language models (LLMs) are harming publishers in two ways,' notes Paul Deegan, president of News Media Canada. 'First, AI companies are flagrantly scraping and summarizing content directly from published news articles via retrieval-augmented generation. Second, because AI overviews are significantly more comprehensive than the snippets associated with traditional links, users may feel they have no reason to click through to the source article on a publisher's website.

'With the user staying within Big Tech's increasingly tall-walled garden, rather than being directed electronically to news websites via links, publishers are deprived of audience, and their ability to sell advertising and subscriptions is significantly diminished.

'No clicks mean no cash for news businesses. Yet, AI companies are selling ads against copyrighted (and often paywalled) content, as well as subscriptions for their premium products.'

The harm to publishers' bottom lines isn't the worst of the problem with AI summaries. Accuracy is also a problem: a report by the BBC earlier this year, examining AI chatbots from Google, Microsoft, OpenAI and Perplexity, found 'significant inaccuracies' in more than half of their summaries of news stories.

That's a study of news-article summaries; you might also have read funny stories about AI search results suggesting people eat a rock every day or put glue on their pizza, or less hilarious stories about them suggesting a mix of vinegar and bleach for home cleaning (a recipe for poisonous chlorine gas). This happens because AI generates text by making plausible predictions based on its training data; nobody is checking for truth in its sources or its answers.
The scale and scope of AI's errors and distortions are not known: not to audiences and media companies, not to regulators, lawmakers or academics, not even to the companies responsible for the AI chatbots. And as AI summaries and chats become more integrated into search systems, people rely on them more, and are therefore less likely to double-check the information AI gives them.

The journalism Ryan Thorpe produced for that special investigation, like all the journalism our newsroom publishes every single day, costs money to produce. Not just to pay investigative journalists, but to pay others in the newsroom to verify that our work is accurate and objective, and to host the information on websites and apps for audiences to read it… if they can find it.

If generative AI continues to crawl 'without consent, credit and compensation,' what does that mean for the long-term future of information we can trust, and, for that matter, for human intelligence?
