‘Bit damning': How Google decided to cut publishers out of AI search
As Google used website data to build a version of Search topped with artificial intelligence-generated answers, an Alphabet executive acknowledged in an internal document that there was an alternative way to do things: the company could ask web publishers for permission, or let them directly opt out of being included.
But giving publishers a choice would make training AI models in search too complicated, Google concluded in the document, which was unearthed during its search antitrust trial. The document said Google had a 'hard red line': any publisher that wanted its content to show up in search results would also have to let that content feed the company's AI features.

Instead of offering options, Google decided to 'silently update', with 'no public announcement' about how it was using publishers' data, according to the document, written by Chetna Bindra, a product management executive at Google Search. As the document put it: 'Do what we say, say what we do, but carefully.'

Related Articles


7NEWS
Named and shamed: Swadesh Indian Restaurant in Baldivis hit with $40k fine for filthy kitchen
An Indian restaurant in Baldivis has been slapped with a fine over a filthy kitchen and dodgy hygiene practices. The owners of Swadesh Indian Restaurant, which has four-and-a-half stars on Google, were fined $40,000 and ordered to pay $24,000 in costs after health inspectors found the kitchen was not up to scratch.

The restaurant opened in 2018 and attracts a swathe of good reviews from locals who praise the freshness of dishes and delicious flavours. On Facebook, the team promises to provide 'the freshest ingredients, highest possible quality, all beautifully prepared and presented so that a typical dinner becomes a great experience'.

But Swadesh's kitchen has not been up to standard since 2022, when City of Rockingham inspectors found food was not stored in a way to prevent contamination, handwashing facilities were not maintained and there was an accumulation of food waste, dirt and grease. When inspectors returned in 2023, they found the kitchen still hadn't been maintained to the expected standard, with handwashing and dirty equipment again found to be a problem.

The $40,000 fine is the biggest handed down to a food business in 2025. Last year, Lavoro Italiano Restaurant, also in the City of Rockingham, was fined the same amount when inspectors found crawling cockroaches and cigarette butts in the dry storage. Prosecutors described the kitchen as one of the worst they'd seen in WA, saying: 'Cockroaches seen during the day indicates a serious infestation. When they were pointed out, the owner was not surprised'.

But a Nando's in Willetton copped the biggest fine of 2024 when it was hit with $160,000 for being filthy, crawling with rats and selling food past its use-by date. This was followed by Belmont-based Aquarium Seafood Chinese Restaurant, which was fined $80,000 for being filthy and riddled with pests.


Perth Now
Filthy Indian restaurant south of Perth hit with $40k fine


The Advertiser
YouTube, Meta, TikTok reveal misinformation tidal wave
Thousands of misleading videos, scam ads and fake profiles made in Australia have been wiped from online platforms over the past year to address a growing wave of misinformation.

More than 25,000 videos deemed to feature "harmful" fake claims were removed from TikTok and YouTube, reports showed, while unverified and misleading election ads ranked among the most commonly removed content by Meta and Google.

Eight technology companies outlined their actions in transparency reports published on Thursday in accordance with the voluntary Australian Code of Practice on Disinformation and Misinformation. Several tech firms, including social media platforms X and Snapchat, declined to detail their efforts to tackle fraudulent content in Australia.

The statistics follow heightened concern about misinformation online after the emergence of generative artificial intelligence tools, and warnings they may be used to create convincing deepfakes and political ads. US firms including Google, Meta, Twitch, Apple and Microsoft released transparency reports under the industry code, and addressed issues including the identification of misleading claims, safeguards for users, and content removal.

TikTok revealed it removed more than 8.4 million videos from its Australian platform during 2024, including more than 148,000 videos deemed to be inauthentic. Almost 21,000 of the videos violated the company's "harmful misinformation policies" during the year, the report said, and 80 per cent, on average, were removed before users could view them.

Google removed more than 5100 YouTube videos from Australia identified as misleading, its report said, out of more than 748,000 misleading videos removed worldwide.

Election advertising also raised red flags for tech platforms in Australia, with Google rejecting more than 42,000 political ads from unverified advertisers and Meta removing more than 95,000 ads for failing to comply with its social issues, elections and politics policies. Meta purged more than 14,000 ads in Australia for violating misinformation rules, took down 350 posts on Facebook and Instagram for misinformation, and showed warnings on 6.9 million posts based on articles from fact-checking partners. In January, the tech giant announced plans to end fact-checking in the US, and its report said it would "continue to evaluate the applicability of these practices" in Australia.

Striking a balance between allowing content to be shared online and ensuring it would not harm others was a "difficult job," Digital Industry Group code reviewer Shaun Davies said, and the reports showed some companies were using AI tools to flag potential violations.

"I was struck in this year's reports by examples of how generative AI is being leveraged for both the creation and detection of (misinformation) and disinformation," he said. "I'm also heartened that multiple initiatives that make the provenance of AI-generated content more visible to users are starting to bear fruit."

In its report, Microsoft also revealed it had removed more than 1200 users from LinkedIn for sharing misinformation, while Apple identified 2700 valid complaints against 1300 news articles.