
Google Decided Against Offering Publishers Options In AI Search
While using website data to build a version of Google Search topped with artificial intelligence-generated answers, an Alphabet Inc. executive acknowledged in an internal document that there were alternative ways to do things: the company could ask web publishers for permission, or let them directly opt out of being included.
But giving publishers a choice would make training AI models in search too complicated, the company concluded in the document, which was unearthed in the company's search antitrust trial. It said Google had a "hard red line" and would require all publishers who wanted their content to show up on the search page to also allow that content to feed AI features. Instead of giving options, Google decided to "silently update," with "no public announcement" about how it was using publishers' data, according to the document, written by Chetna Bindra, a product management executive at Google Search. "Do what we say, say what we do, but carefully."
Google's dominance in search, which a federal court ruled last year is an illegal monopoly, has given it a decisive advantage in the ongoing AI wars. According to Google's rules - and previous trial testimony from a company vice-president of product - the tech giant may use the content that feeds into its search engine results to develop other search-related AI products. Publishers can only shield their data from search AI if they opt out of search altogether, Google has said.
Site owners that rely on traffic can't afford to skip listing on Google, which still holds more than 90% of the search market, making it a gateway to the modern web. Many have reluctantly let Google use their content to power search AI features, like AI Overviews, which provides AI-generated responses for some queries - even though the feature often eats into their traffic. By answering questions directly, AI Overviews obviates the need for users to click on links, depriving sites of opportunities to make money by showing ads and selling products.
The Google document displayed in court shows the company recognized from the beginning the possibility of giving publishers more control, said Paul Bannister, the chief strategy officer at Raptive, which represents online creators.
"It's a little bit damning," he said. "It pretty clearly shows that they knew there was a range of options and they pretty much chose the most conservative, most protective of them - the option that didn't give publishers any controls at all."
Google was recently on trial in Washington as a federal judge mulled what steps the tech giant must take to restore competition in online search. Judge Amit Mehta, who presided over the hearings, is now considering a set of remedies proposed by antitrust enforcers aimed at curbing Google's market dominance. The final day of testimony was May 9, with closing arguments set for later this month. A ruling on the proposed remedies is expected in August.
One part of the Justice Department's proposal is compelling Google to give online publishers and creators a way to opt out of having the content of their web pages be used to train Google's generative AI models "on a model-by-model basis," as well as having an opt-out for individual generative AI products "on a product-by-product basis," without penalty.
Among the options discussed in internal company slides, Google listed the possibility of "SGE-only opt-outs" - which would have let publishers opt out of having their content used in some generative AI features in Google Search, without disappearing from the search engine itself. One item under discussion would have allowed publishers to "choose to opt their content out of being displayed within" AI Overviews, though their data "would still be used for training purposes." Another, which Google presented as the most extreme, would have let publishers "opt out of their data being used for grounding" - a process in which Google and other AI companies anchor their models in real-world sources, with the aim of preventing AI from making up information and making its responses more accurate.
Google ultimately chose to give publishers no new options. The presentation advised introducing "no new controls BUT reposition publicly" to point publishers toward an existing opt-out called "no snippet," which allows publishers to be exempt from AI Overviews and other search features. Choosing this option also causes summaries of their website to disappear from the search page, making people unlikely to click on the link.
"Publishers have always controlled how their content is made available to Google as AI models have been built into Search for many years, helping surface relevant sites and driving traffic to them," a Google spokesperson said in a statement in response to questions about the trial exhibit from Bloomberg. "This document is an early-stage list of options in an evolving space and doesn't reflect feasibility or actual decisions." They added that Google continually updates its product documentation for search online.
The document shown in court included recommendations for how company representatives might consider communicating the information, as well as what not to say explicitly. "If aligned, as a next step, we will work on actual language and get this out," Bindra's document, which was written in April 2024, concluded. One month later, at its annual developers conference in Mountain View, California, Google broadly infused search with AI, in what it called a "fully revamped" experience.
In the year since AI Overviews launched, traffic to some publishers' sites has dropped precipitously. Even more significant to publishers in the long run is that their content is advancing the development of models that produce something good enough to replace it, said Brooke Hartley Moy, chief executive officer of Infactory, an AI startup that works with publishers.
"If Google's models get to a point where the human element of content is diminished, then they've kind of signed their own death warrant," Hartley Moy said of publishers.
As publishers search for new revenue streams, allowing their content to be used for retrieval augmented generation, or RAG - a technique in which AI models refer back to specific sources to provide more accurate responses - has emerged as a promising contender, Hartley Moy said. That's what makes Google's move to take RAG off the negotiating table so significant, she said.
"RAG doesn't exist without publishers," Hartley Moy said. "To me, this is a strategy in ensuring that Google has full market power, and the publishers lose one of their key chips in the negotiation."
Under questioning by Google lawyer Kenneth Smurzynski, Liz Reid, the company's head of search, testified that creating multiple opt-outs for different products and models would be challenging.
"That would mean if Search has multiple GenAI features on the page, which it can easily do, each of those would be required to have a separate model powering it. But we don't build separate models for those," Reid said, according to court transcripts of the trial testimony on May 6.
"And so by saying a publisher could be like, 'I want to be in this feature but not that feature,' it doesn't work," she continued. "Because then we would essentially have to say, every single feature on the page needs a different model." This would be very costly not only because of the significant investment in hardware and chips that it would require, Reid said, but also because it would be a challenge to ensure the different AI models operated efficiently and delivered fast responses. "It adds enormous complexity," she testified.