
Google is reaping rewards of its unfair AI advantage
AI Overviews – the short summaries generated by the company's Gemini AI model – have turned Google from search engine into answer engine for billions of users, settling the nerves of investors who were worried that ChatGPT was going to smash Google's business model to pieces.
Then again, to describe those billions as 'users,' as parent company Alphabet Inc did when announcing its quarterly earnings last week, is perhaps disingenuous.
No one consciously uses AI Overviews – it's just there when users perform a regular search on Google, something billions of them have done several times a day for two decades.
That's one key advantage Google has over its competitors: People already associate the service with finding things out.
The company has every right to capitalise on that reputation, one it built off the back of genuine innovation and quality (though, admittedly, it was later solidified with illegal multibillion-dollar deals to prevent competition).
Google's second advantage with AI Overviews, however, warrants further scrutiny.
Like other generative AI tools, the feature draws heavily on content that Google does not own but that is available on the open web.
Summarised answers are synthesised from one or more sources into a rewritten piece of information.
That's useful for users; it saves them a click.
But it's devastating for content creators, who lose a would-be visitor and the revenues that follow.
Startling Pew Research data released last week suggested users were considerably less likely to click through to websites if presented with an AI Overview, as is increasingly the case.
One in five searches in a March 2025 sampling contained an AI Overview, a frequency that rises to as high as 60% if the queries are longer or contain the bread-and-butter words of journalism: who, what, where, when or why.
Google has pushed back against the methodology of the Pew study, saying its dataset – 68,879 searches by 900 US adults – was too small to be representative.
Other AI chatbots offer the same kind of functionality, of course.
But in those cases, publishers who wish to keep their content out can block these companies' 'crawlers' by adding a line of code – a digital bouncer at the door – to their site's robots.txt file.
That approach doesn't work with Google, however, because blocking its AI crawler also means blocking a site from Google's search results – a death sentence for any website.
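To illustrate the mechanism, here is a minimal robots.txt sketch of the kind of bouncer publishers can post for standalone AI crawlers; the user-agent tokens are the ones OpenAI, Anthropic and Perplexity publish for their bots.

    # robots.txt – turn away standalone AI crawlers at the door
    User-agent: GPTBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /

    User-agent: PerplexityBot
    Disallow: /

There is no comparable line for AI Overviews: as noted above, shutting out Google's AI means disallowing Google's search crawler and disappearing from the results pages.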
Dominant position
Google is leveraging its dominant position in one industry to force success in another.
It's monopolistic behaviour, and it should be addressed immediately in the remedies being devised following the antitrust trial the company lost last year.
This is about taking away Google's cheat code.
'Google still thinks they're special and that they don't have to play by the same rules that the rest of the industry does,' Matthew Prince, chief executive officer of Cloudflare, told Bloomberg News in an interview last week.
New tool
His company recently launched a tool that would allow publishers to set up a 'pay-per-crawl' model for AI use.
It works on crawlers from OpenAI, Anthropic, Perplexity and most others – but blocking Google AI would, again, mean blocking a site from Google's search engine.
In Google's defence, the launch of AI Overviews was spurred not by a desire to crush the economics of the web – which have driven its entire business – but to stop users from deserting the company in favour of AI chatbots.
'The consumer is forcing them,' Wells Fargo analyst Ken Gawrelski said.
Google was more than satisfied with the status quo, Gawrelski told me, which is partly why the company was beaten to market by smaller AI firms that didn't need to worry about protecting an existing revenue stream.
Now that the fight is on, Google is playing catch-up and doing rather well at it.
It has protected its advertising revenue, which rose 12% in the last quarter from a year earlier to a record US$54.2bil.
Supply constraints
Its AI and cloud business faces supply constraints, warranting an additional US$10bil in capital expenditure, bringing it to US$85bil for the year.
It recently added 'AI Mode' to its search engine, which is like AI Overviews on steroids.
The company has barely started to integrate AI across its varied products like Gmail and Maps – the Financial Times noted that 15 distinct Google products have more than 500 million users.
Executives said they will be able to monetise all of these innovations quickly.
The company has less to say about what happens to the businesses that rely on Google traffic to stay alive and that, in turn, provide the content that makes smart AI possible.
The shift is profound: Google's creation democratised the web, making it possible for an ecosystem of new sites and services to be found and supported.
Now, the company's strategy is to make it so users need to visit only Google.
'We have to solve the business models for the varying players involved,' Sundar Pichai, Alphabet's CEO, said in a call with analysts without elaborating.
AI wreckage
Salvaging content creators from the coming AI wreckage begins by forcing Google to relinquish its unfair advantage.
Only then will the company be compelled to enter into reasonable licensing arrangements with the creators whose content it uses, as it has already done with the likes of Reddit.
We can be mildly encouraged that Google is reportedly seeking out content deals for use within its other AI products, perhaps in anticipation that the unfair advantage won't last. — Bloomberg
Dave Lee is Bloomberg Opinion's US technology columnist. The views expressed here are the writer's own.
