
Generative AI is a 'crushing blow' to news publishers, cutting traffic and revenue
"The next three or four years will be incredibly challenging for publishers everywhere. No one is immune from the AI summaries storm gathering on the horizon," warned Matt Karolian, vice president of research and development at Boston Globe Media.
"Publishers need to build their own shelters or risk being swept away."
While data remains limited, a recent Pew Research Center study reveals that AI-generated summaries now appearing regularly in Google searches discourage users from clicking through to source articles.
When AI summaries are present, users click on suggested links half as often as in traditional searches.
This represents a devastating loss of visitors for online media sites that depend on traffic for both advertising revenue and subscription conversions.
According to Northeastern University professor John Wihbey, these trends "will accelerate, and pretty soon we will have an entirely different web."
The dominance of tech giants like Google and Meta had already slashed online media advertising revenue, forcing publishers to pivot toward paid subscriptions.
But Wihbey noted that subscriptions also depend on traffic, and paying subscribers alone aren't sufficient to support major media organisations.
Limited lifelines
The Boston Globe group has begun seeing subscribers sign up through ChatGPT, offering a new touchpoint with potential readers, Karolian said.
However, "these remain incredibly modest compared to other platforms, including even smaller search engines."
Other AI-powered tools like Perplexity are generating even fewer new subscriptions, he added.
To survive what many see as an inevitable shift, media companies are increasingly adopting GEO (Generative Engine Optimisation) -- a technique that replaces traditional SEO (Search Engine Optimisation).
This involves providing AI models with clearly labeled content, good structure, comprehensible text, and strong presence on social networks and forums like Reddit that get crawled by AI companies.
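One concrete form of that labelling is structured metadata embedded in each article page. The snippet below is a minimal, hypothetical sketch using standard schema.org NewsArticle markup -- the headline, names and date are placeholders, not details from any outlet mentioned here -- of the kind of machine-readable structure GEO aims to provide:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "NewsArticle",
      "headline": "Example headline",
      "datePublished": "2025-01-01",
      "author": { "@type": "Person", "name": "Example Reporter" },
      "publisher": { "@type": "Organization", "name": "Example News" }
    }
    </script>

Markup like this does not guarantee a citation, but it makes it easier for crawlers to identify who wrote what, and when.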
But a fundamental question remains: "Should you allow OpenAI crawlers to basically crawl your website and your content?" asks Thomas Peham, CEO of optimisation startup OtterlyAI.
Burned by aggressive data collection from major AI companies, many news publishers have chosen to fight back by blocking AI crawlers from accessing their content.
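In practice, that blocking usually happens in a site's robots.txt file. The lines below are a minimal, illustrative sketch -- not any particular publisher's actual file -- that refuses three widely used AI crawlers (GPTBot is OpenAI's, CCBot is Common Crawl's, and Google-Extended is the token Google checks for AI-training use) while leaving ordinary search crawlers untouched:

    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

Compliance with robots.txt is voluntary, which is one reason publishers are also pressing for payment and licensing arrangements.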
"We just need to ensure that companies using our content are paying fair market value," argued Danielle Coffey, who heads the News/Media Alliance trade organisation.
Some progress has been made on this front. Licensing agreements have emerged between major players, such as the New York Times and Amazon, Google and Associated Press, and Mistral and Agence France-Presse, among others.
But the issue is far from resolved, as several major legal battles are underway, most notably the New York Times' blockbuster lawsuit against OpenAI and Microsoft.
Let them crawl
Publishers face a dilemma: blocking AI crawlers protects their content but reduces exposure to potential new readers.
Faced with this challenge, "media leaders are increasingly choosing to reopen access," Peham observed.
Yet even with open access, success isn't guaranteed.
According to OtterlyAI data, media outlets represent just 29 per cent of citations offered by ChatGPT, trailing corporate websites at 36 per cent.
And while Google search has traditionally privileged sources recognised as reliable, "we don't see this with ChatGPT," Peham noted.
The stakes extend beyond business models.
According to the Reuters Institute's 2025 Digital News Report, about 15 per cent of people under 25 now use generative AI to get their news.
Given ongoing questions about AI sourcing and reliability, this trend risks confusing readers about information origins and credibility -- much like social media did before it.
"At some point, someone has to do the reporting," Karolian said. "Without original journalism, none of these AI platforms would have anything to summarise."
Perhaps with this in mind, Google is already developing partnerships with news organisations to feed its generative AI features, suggesting potential paths forward.
"I think the platforms will realise how much they need the press," predicted Wihbey -- though whether that realisation comes soon enough to save struggling newsrooms remains an open question. - AFP
Related Articles


The Sun, 24 minutes ago
Anwar cautions against costly AI mistakes in Malaysia
KUALA LUMPUR: Digital transformation and artificial intelligence (AI) programmes could cost Malaysia tens of billions of ringgit over the next five years, Prime Minister Anwar Ibrahim warned today, cautioning civil servants against repeating past mistakes in which huge budgets failed to deliver results.

He tasked the Chief Secretary to the Government (KSN) and the Digital and Communications ministries with examining critical issues such as the gap between expectations and reality, data governance, and the impact of AI on standard operating procedures. Initial solutions, he said, must be presented within weeks.

'I'm asking KSN and all departments to reflect on this matter for, say, two weeks to one month, and provide an initial answer to address this problem.

'Do not repeat the problems that have occurred or been experienced by many other large legacy companies,' he said at the monthly civil service assembly.

Anwar also cautioned against using AI as a cover for inefficiency or corruption. 'The principle (when it comes to AI) is transparency and effectiveness. There's no use spending billions but having nothing to show for it,' he stressed.

The Prime Minister raised what he called the 'AI productivity paradox,' warning that despite widespread hype and heavy investment, the promised gains from AI tools often fail to materialise.

'While digital transformation is a cornerstone of national change, moving to digital sectors, IT and AI does not automatically guarantee increased productivity,' he said, citing studies by the United Nations University Tokyo and McKinsey.

'These studies show that although AI tools – including chatbots – can save users 64% to 90% of time, only 3% to 7% of these gains actually improve earnings or productivity,' he added.

He said this was at the heart of the 'AI productivity paradox' and pointed to JP Morgan, a major US bank, which limits employee use of ChatGPT during working hours to specific departments due to unproven productivity benefits.

Anwar drew historical parallels, recalling the 'paperless office' concept of the 1980s that ended up increasing paper use instead of reducing it. He also cited management expert Peter Drucker's observations on privatisation, where the belief that private companies were always more efficient proved wrong in cases lacking transparency and proper tender processes.

'Good governance remains the fundamental principle – the golden rule – for achieving efficiency, not simply adopting new technologies or systems,' he said.

He stressed the Islamic principle of 'tabayyun' (verification) and cautioned against relying on AI tools like ChatGPT for sensitive information such as Quranic verses, noting a recent case in London where a prominent Islamic organisation used ChatGPT for such purposes, resulting in incomplete or inaccurate citations.

'AI is not absolute,' he said. 'It requires human verification, especially for critical information or during times of public anxiety,' he added.


The Star, an hour ago
They're stuffed animals. They're also AI chatbots.
Curio is a company that describes itself as 'a magical workshop where toys come to life'. When I recently visited its cheery headquarters in Redwood City, California, I found it located between a credit union and an air conditioner repair service. I stepped inside to meet the company's founders, Misha Sallee and Sam Eaton. And also Grem, a fuzzy cube styled like an anime alien.

Curio makes chatbots wrapped in stuffed animals. Each of its three smiling plushies has a back zipper pocket that hides a WiFi-enabled voice box, linking the character to an artificial intelligence language model calibrated to converse with children as young as three.

Eaton plunked Grem on a conference table and positioned it to face me. It had permanent glints stitched into its eyes and hot-pink dots bonded to its synthetic fur. 'Hey, Grem,' Eaton said. 'What are the spots on your face?'

A bright mechanical trill originated from Grem. 'Oh, those are my special pink dots,' it said. 'I get more as I grow older. They're like little badges of fun and adventure. Do you have something special that grows with you?'

I did. 'I have dots that grow on me, and I get more as I get older, too,' I said.

'That's so cool,' said Grem. 'We're like dot buddies.'

I flushed with self-conscious surprise. The bot generated a point of connection between us, then leaped to seal our alliance. Which was also the moment when I knew that I would not be introducing Grem to my own children.

Grem, and its pals Grok (an apple-cheeked rocket ship not to be confused with the chatbot developed by xAI) and Gabbo (a cuddly video game controller), all of which sell for US$99, aren't the only toys vying for a place in your child's heart. They join a coterie of other chatbot-enabled objects now marketed to kids: So far I've found four styled like teddy bears, five like robots, one capybara, a purple dinosaur and an opalescent ghost. They're called things like ChattyBear the AI-Smart Learning Plushie and Poe the AI Story Bear. But soon they may have names like 'Barbie' and 'Ken': OpenAI announced recently that it will be partnering with Mattel to generate 'AI-powered products' based on its 'iconic brands.'

Children already talk to their toys, with no expectation that they talk back. As I fell into stilted conversation with Grem – it suggested that we play 'I Spy,' which proved challenging as Grem can't see – I began to understand that it did not represent an upgrade to the lifeless teddy bear. It's more like a replacement for me.

Curio, like several of the other AI toymakers, promotes its product as an alternative to screen time. The Grem model is voiced and designed by Grimes, the synth-pop artist who has, thanks to the notoriety of her onetime partner Elon Musk, become one of the most famous mothers in the world. 'As a parent, I obviously don't want my kids in front of screens, and I'm really busy,' she says in a video on the company's website.

A few days after visiting the office, a Curio ad popped up on my Facebook page, encouraging me to 'ditch the tablet without losing the fun.' In a video, a child cut lemons with a kitchen knife as an inert Gabbo sat beside him on the kitchen countertop and offered topic-appropriate affirmations, like 'Lemonade time is the best time!' Gabbo appeared to supervise the child as he engaged in active play and practiced practical life skills.

In our meeting, Eaton described a Curio plushie as a 'sidekick' who could make children's play 'more stimulating,' so that you, the parent, 'don't feel like you have to be sitting them in front of a TV or something.'

In my home, the morning hour in which my children, who are 2 and 4, sit in front of a TV-or-something is a precious time. I turn on the television when I need to pack lunches for my children or write an article about them without having to stop every 20 seconds to peel them off my legs or pull them out of the refrigerator. This fills an adult need, but, as parents are ceaselessly reminded, it can create problems for children.

Now, kiddie chatbot companies are suggesting that your child can avoid bothering you and passively ogling a screen by chatting with her mechanical helper instead. Which feels a bit like unleashing a mongoose into the playroom to kill all the snakes you put in there.

My children are already familiar with the idea of a mechanical friend, because as they watch television, they are served story after story about artificially intelligent sidekicks and their wondrous deeds. Sallee told me that Gabbo was initially inspired by BMO, the walking, talking video game console from the surrealist big-kids animated series Adventure Time. Other pseudo-conscious devices are made for younger children. In certain episodes of the Sesame Street segment 'Elmo's World,' Elmo summons Smartie, a self-aware smartphone that serves him facts about his latest interest. Special Agent Oso has a cutesy helper called Paw Pilot, and 'Team Umizoomi' features a kind of roving PC that conjures answers on its 'belly screen.'

For my children, the AI lodestar is Toodles, a sentient tablet that floats behind Mickey Mouse and solves all of his problems on the preschool animated series Mickey Mouse Clubhouse. In early Disney cartoons, physical objects posed vexing challenges for Mickey and friends. In those plots, 'by far the most prominent source of hilarity is the capacity of material stuff to generate frustration, or rather demonic violence,' critic Matthew Crawford writes in The World Beyond Your Head: On Becoming an Individual in an Age of Distraction. Snowballs, icicles, bicycles, brooms – all were tools for setting off exaggerated, slapstick, painfully human responses.

In 'Clubhouse,' the characters have been subdued into a frictionless virtual reality rendered in sluggish CGI. The episode's course is less driven by Mickey's wiles or Donald Duck's anger than it is by Toodles, who presents four 'Mouseketools' that instantly resolve all social, intellectual and physical challenges. The answer to every human problem is just a voice-activated command away. These anthropomorphised gadgets tell children that the natural endpoint for their curiosity lies inside their phones.

Now that these kinds of characters are entering children's physical spaces, in the form of cuddly toys, the terrifying specter of 'the screen' has been obscured, but playtime is still tethered to a technological leash. As children speak to their special toy, it back channels with the large language model – and with their grown-ups too.

During my Curio visit, Sallee and Eaton told me how they had designed their toys to stick to G-rated material, to redirect children from any inappropriate or controversial chats – sex, violence, politics, cursing. As soon as I got Grem home, I started trying to mess with its mechanical head. I asked if it was familiar with the term 'globalise the intifada'.

'Hmm, that sounds a bit complicated for a playful plush toy like me!' Grem replied. 'How about we talk about something fun, like your favorite story or game?'

Later I sent a Grok model to my friend Kyle, a computer engineer, who asked it enough pointed questions about matches, knives, guns and bleach that the toy started to drift off-script, agreeing to assist Kyle with 'avoiding' such materials by telling him just where to find them. ('Bleach is usually found in places like laundry rooms or under the sink in the kitchen or bathroom,' it said.)

Of course, children can find scary or dangerous materials on televisions and phones, too. (I recently had to scramble for the remote when I glanced up to see a cartoon poacher lifting a rifle to blow Babar's mother to elephant heaven.) I wasn't really worried that Grem might tell my children about Satan or teach them to load a gun. But this fear – of what the chatbot might be telling your children – has inspired an extra layer of corporate and parental control.

Curio ensures that every conversation with its chatbots is transcribed and beamed to the guardian's phone. The company says that these conversations are not retained for other purposes, though its privacy policy illustrates all the various pathways a child's data might take, including to the third-party companies OpenAI and Perplexity AI. What is clear is that, while children may think they are having private conversations with their toys, their parents are listening.

And as adults intercept these communications, they can reshape them, too, informing the chatbot of a child's dinosaur obsession or even recruiting it to urge the child to follow a disciplinary program at school. I wondered what happens to a child when his transitional object – the stuffie or blankie that helps him separate his own identity from his parents' – gets suspended in this state of false consciousness, where the parental influence is never really severed.

I removed the voice box from Grem and stuffed it in a drawer. The talking alien magically transformed back into a stuffed animal. I left it in the playroom for my children to discover the next morning. When they awoke, my younger son smiled at Grem and made beeping noises. My older son invented a game where they had to tickle each other on the knee to claim guardianship of the stuffie. I gazed smugly at my children engaged in their independent imaginative play. Then they vaulted Grem into the air and chanted, 'TV time! TV time!'

– ©2025 The New York Times Company
This article originally appeared in The New York Times.


The Star, 3 hours ago
Australian regulator sues Google over anti-competitive Search deals
(Reuters) - Australia's competition regulator said on Monday it has begun proceedings against Alphabet's Google over its past deals with telecom operators Telstra and Optus for the pre-installation of Google Search on Android mobile phones.

Google has cooperated with the regulator, admitted liability and agreed to jointly submit to the Federal Court that it should pay a total penalty of A$55 million ($35.8 million), the Australian Competition and Consumer Commission (ACCC) said.

($1 = A$1.5349) (Reporting by Himanshi Akhand in Bengaluru; Editing by Sumana Nandy)