
Rising AI search pushing media ecosystem to edge
Generative artificial intelligence assistants such as ChatGPT are reducing traffic to traditional online search engines, leading to fewer visitors for news websites and negatively affecting the advertising revenue they rely on, delivering a severe setback to an industry already struggling to stay afloat.
'The next three or four years will be incredibly challenging for publishers everywhere. No one is immune from the AI summaries storm gathering on the horizon,' warned Matt Karolian, vice president of research and development at Boston Globe Media. 'Publishers need to build their own shelters or risk being swept away.'
While data remains limited, a recent Pew Research Center study reveals that AI-generated summaries, now appearing regularly in Google searches, discourage users from clicking through to source articles.
When AI summaries are present, users click on suggested links half as often as in traditional searches.
This represents a devastating loss of visitors for online media sites that depend on traffic for both advertising revenue and subscription conversions.
According to Northeastern University professor John Wihbey, these trends 'will accelerate and pretty soon we will have an entirely different web.'
The dominance of tech giants like Google and Meta had already slashed online media advertising revenue, forcing publishers to pivot toward paid subscriptions.
But Wihbey noted that subscriptions also depend on traffic, and paying subscribers alone aren't sufficient to support major media organizations.
The Boston Globe group has begun seeing subscribers sign up through ChatGPT, offering a new touchpoint with potential readers, Karolian said.
However, 'these remain incredibly modest compared to other platforms, including even smaller search engines.' Other AI-powered tools like Perplexity are generating even fewer new subscriptions, he added.
To survive what many see as an inevitable shift, media companies are increasingly adopting GEO (Generative Engine Optimization) – a technique that replaces traditional SEO (Search Engine Optimization).
This involves providing AI models with clearly labeled content, well-structured text, and a strong presence on social networks and forums, such as Reddit, that AI companies crawl.
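In practice, the 'clearly labeled content' half of GEO often comes down to machine-readable metadata such as schema.org JSON-LD embedded in the page. A minimal sketch in Python (the headline, byline, and date values here are purely illustrative, not from any real article):

```python
import json

def article_jsonld(headline, author, date_published, description):
    """Build a schema.org NewsArticle object as JSON-LD (values are illustrative)."""
    return {
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
        "description": description,
    }

snippet = json.dumps(article_jsonld(
    "Rising AI search pushing media ecosystem to edge",
    "Jane Doe",
    "2025-08-06",
    "Generative AI assistants are cutting traffic to news sites.",
), indent=2)

# A publisher would embed `snippet` in the page inside a
# <script type="application/ld+json"> tag so crawlers can parse it.
print(snippet)
```

The same structured fields that search engines have long consumed give AI crawlers unambiguous signals about headline, authorship, and publication date.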
But a fundamental question remains: 'Should you allow OpenAI crawlers to crawl your website and your content basically?' asks Thomas Peham, CEO of optimization startup OtterlyAI.
Burned by aggressive data collection from major AI companies, many news publishers have chosen to fight back by blocking AI crawlers from accessing their content.
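Blocking is usually done via robots.txt directives aimed at specific crawler user-agents. A minimal sketch, assuming a publisher wants to refuse OpenAI's GPTBot and Common Crawl's CCBot (both publicly documented crawler tokens) while leaving ordinary search crawlers alone; Python's standard-library robot parser can verify the rules behave as intended:

```python
from urllib.robotparser import RobotFileParser

# robots.txt rules a publisher might serve to keep AI crawlers out
# while allowing everything else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/story"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/story"))  # True
```

Compliance is voluntary, of course: robots.txt only keeps out crawlers that choose to honor it, which is part of why publishers have also pursued licensing and litigation.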
'We just need to ensure that companies using our content are paying fair market value,' argued Danielle Coffey, who heads the News/Media Alliance trade organization.
Some progress has been made on this front. Licensing agreements have emerged between major players, such as The New York Times and Amazon, Google and the Associated Press (AP), and Mistral and Agence France-Presse (AFP), among others.
But the issue is far from resolved, as several major legal battles are underway, most notably the New York Times' blockbuster lawsuit against OpenAI and Microsoft.
Publishers face a dilemma: blocking AI crawlers protects their content, but it also reduces exposure to potential new readers.
Faced with this challenge, 'media leaders are increasingly choosing to reopen access,' Peham observed.
Yet even with open access, success isn't guaranteed.
According to OtterlyAI data, media outlets account for just 29% of the citations provided by ChatGPT, trailing corporate websites at 36%.
And while Google search has traditionally privileged sources recognized as reliable, 'we don't see this with ChatGPT,' Peham noted.
The stakes extend beyond business models.
According to the Reuters Institute's 2025 Digital News Report, about 15% of people under 25 now use generative AI to get their news.
Given ongoing questions about AI sourcing and reliability, this trend risks confusing readers about the origins and credibility of information – much like social media did before it.
'At some point, someone has to do the reporting,' Karolian said. 'Without original journalism, none of these AI platforms would have anything to summarize.' Perhaps with this in mind, Google is already developing partnerships with news organizations to feed its generative AI features, suggesting potential paths forward.
'I think the platforms will realize how much they need the press,' predicted Wihbey – though whether that realization comes soon enough to save struggling newsrooms remains an open question.

Related Articles


Qatar Tribune
OpenAI readies for GPT-5 release as it launches 2 'open' models
Agencies
OpenAI is likely just days away from the launch of GPT-5, the latest installment of the artificial intelligence technology behind the ChatGPT juggernaut launched in 2022, and users will be able to scrutinize whether the step up from GPT-4 is on par with the research lab's previous improvements.
The AI giant, which claimed earlier this week it is 'on track' to reach 700 million weekly active users, also unveiled on Tuesday two new 'open' models that can be customized and run locally on a laptop. Dubbed 'gpt-oss-120b' and 'gpt-oss-20b,' the two models are so-called 'open-weight language models' that let users access the AI's parameters and customize them, a departure from the company's ChatGPT bot, which is based on a 'closed' model. 'Developed with open-source community feedback, these models deliver meaningful advancements in both reasoning capabilities & safety,' OpenAI said in a statement on X.
In parallel, media reports indicated that GPT-5 is coming 'early' in August, with company CEO Sam Altman also saying in July that it would be released 'soon.' 'We are releasing GPT-5 soon but want to set accurate expectations: this is an experimental model that incorporates new research techniques we will use in future models. We think you will love GPT-5, but we don't plan to release a model with IMO gold level of capability for many months,' he wrote on X.
Two early testers of the new model told Reuters they have been impressed with its ability to code and solve science and math problems, but they believe the leap from GPT-4 to GPT-5 is not as large as the one from GPT-3 to GPT-4. The testers, who have signed non-disclosure agreements (NDAs), declined to be named for this story. OpenAI declined to comment to Reuters.
GPT-4's leap was based on more compute power and data, and the company was hoping that 'scaling up' in a similar way would consistently lead to improved AI models.
But OpenAI, which is backed by Microsoft and is currently valued at $300 billion, ran into issues scaling up. One problem was the data wall: OpenAI's former chief scientist, Ilya Sutskever, said last year that while processing power was growing, the amount of data was not. He was referring to the fact that large language models (LLMs) are trained on massive datasets that scrape the entire internet, leaving AI labs with no other options for large troves of human-generated textual data.
Apart from the lack of data, another problem was that 'training runs' for large models are more likely to suffer hardware-induced failures given how complicated the systems are, and researchers may not know the eventual performance of a model until the end of the run, which can take months.
OpenAI has not said when GPT-5 will be released, but the industry expects it to be any day now, according to media reports. Boris Power, head of Applied Research at OpenAI, said in an X post on Monday: 'Excited to see how the public receives GPT-5.'
'OpenAI made such a great leap from GPT-3 to GPT-4, that ever since then, there has been an enormous amount of anticipation over GPT-5,' said Navin Chaddha, managing partner at venture capital fund Mayfield, who invests in AI companies but is not an OpenAI investor. 'The hope is that GPT-5 will unlock AI applications that move beyond chat into fully autonomous task execution.'
Nearly three years ago, ChatGPT introduced the world to generative AI, dazzling users with its ability to write humanlike prose and poetry and quickly becoming one of the fastest-growing apps ever. In March 2023, OpenAI followed up ChatGPT with the release of GPT-4, a large language model that made huge leaps forward in intelligence. While GPT-3.5, an earlier version of the model, scored in the bottom 10% on a simulated bar exam, GPT-4 passed it in the top 10%.
GPT-4 then became the model to beat, and the world came to terms with the fact that AI models could outperform humans in many tasks. Soon, other companies were catching up. The same year, Alphabet's Google and Anthropic – which is backed by Amazon and Google – released models competitive with GPT-4. Within a year, open-source models on par with GPT-4, such as Meta Platforms' Llama 3 models, were released.
Along with training large models, OpenAI has also invested in another route, called 'test-time compute,' which channels more processing power into solving challenging tasks such as math or complex operations that demand human-like reasoning and decision-making. Altman said earlier this year that GPT-5 would combine both test-time compute and its large models. He also said that OpenAI's model and product offerings had become 'complicated.'


Qatar Tribune
Meta working to thwart WhatsApp scammers
Agencies
Meta on Tuesday said it shut nearly seven million WhatsApp accounts linked to scammers in the first half of this year and is ramping up safeguards against such schemes. 'Our team identified the accounts and disabled them before the criminal organizations that created them could use them,' WhatsApp external affairs director Clair Deevy said.
Often run by organized gangs, the scams range from bogus cryptocurrency investments to get-rich-quick pyramid schemes, WhatsApp executives said in a briefing. 'There is always a catch and it should be a red flag for everyone: you have to pay upfront to get promised returns or earnings,' Meta-owned WhatsApp said in a blog post.
WhatsApp detected and banned more than 6.8 million accounts linked to scam centers, most of them in Southeast Asia, according to Meta. WhatsApp and Meta worked with OpenAI to disrupt a scam traced to Cambodia that used ChatGPT to generate text messages containing a link to a WhatsApp chat to hook victims, according to the tech companies.
New 'safety overviews' provide information about the group and tips on spotting scams, along with the option of making a quick exit. 'We've all been there: someone you don't know attempting to message you, or add you to a group chat, promising low-risk investment opportunities or easy money, or saying you have an unpaid bill that's overdue,' Meta said in a blog post.


Al Jazeera
Tech giants turning blind eye to child sex abuse, Australian watchdog says
Australia's internet watchdog has accused tech giants including Google and Apple of failing to take action against child sex abuse on their platforms.
In a report released on Wednesday, eSafety Commissioner Julie Inman Grant said tech platforms were failing to implement various measures to protect children, including scanning cloud services for known abuse material and using language analysis tools to detect attempted sexual extortion in messaging services. Grant said that Apple and YouTube, which is owned by Google, also failed to track reports of child sex abuse and could not say how long it took them to respond to the reports they received.
'It shows that when left to their own devices, these companies aren't prioritising the protection of children and are seemingly turning a blind eye to crimes occurring on their services,' Grant said in a statement. 'We need to keep the pressure on the tech industry as a whole to live up to their responsibility to protect society's most vulnerable members from the most egregious forms of harm and that's what these periodic notices are designed to encourage.'
Grant added that the companies had taken few steps to improve their efforts since being asked three years ago, 'despite the promise of AI to tackle these harms and overwhelming evidence that online child sexual exploitation is on the rise'. 'No other consumer-facing industry would be given the licence to operate by enabling such heinous crimes against children on their premises, or services,' she said.
Google disputed the report's findings, saying they were rooted in 'reporting metrics, not online safety performance' and that more than 99 percent of abuse material on YouTube is automatically removed before being flagged. 'Child safety is critical to us,' a Google spokesperson said. 'We've led the industry fight against child sexual abuse material since day one, investing heavily in advanced technology to proactively find and remove this harmful content.'
Apple, Microsoft, Meta, Snap, and Discord, which were also included in the report, did not respond to requests for comment.
Tom Sulston, head of policy at Digital Rights Watch, said that while it was important for authorities to take action against online child abuse, some of the tools supported by the internet watchdog would raise serious civil liberties and privacy concerns. Sulston said that scanning live calls and private messages would require platforms to abandon end-to-end encryption, which prevents communications from being viewed by anyone apart from the sender and receiver.
'That's a gross invasion of privacy for all of the people making perfectly innocent and reasonable use of the service,' Sulston told Al Jazeera. 'It also has dangerous knock-on effects where the users of that service would be subject to surveillance from hostile actors – foreign governments, criminals, hackers. That's a huge risk for civic society, activists, journalists and anyone who communicates on the internet.'
Breaking encryption would be 'disproportionate and dangerous,' Sulston added. 'We don't expect the post office to open all letters and read them for illegal content – in fact, most countries have laws specifically against this,' he said.