
Google in ‘tough position' as it balances AI advances and advertising revenue
Google is facing a 'tough' balancing act of showing it can innovate in AI while maintaining its vital advertising revenue from its search engine, an expert has said.
On Tuesday, the tech giant announced that it was rolling out a new, optional version of its search engine entirely powered by artificial intelligence, which it said would enable users to ask longer, more complex queries.
It was part of a string of announcements around new AI tools coming to Google's various services.
But Leo Gebbie, industry expert and principal analyst at CCS Insight, said the company would need to strike a delicate balance between showing it was a leader in AI, while also protecting the money it raises from its search engine, which makes up the 'vast majority' of its revenue.
'As expected, Google is wrapping AI more tightly into its products and services than ever before. This includes Search, which will now get a dedicated AI mode,' he said.
'Any moves that Google makes to amend its Search product are of critical importance given that this contributes the vast majority of Google's revenues on a quarter-by-quarter basis.
'The new interface appears to try and cut down on the number of web pages that users will need to navigate to, allowing Google to handle more complex inquiries within its own platform.
'For the end user, this should mean less time spent browsing the web itself, and more time spent talking with Google's AI tools.
'What's far from clear is how Google will include advertising here, how third-party websites will respond to a lack of traffic to their content, and – as always – whether AI can be trusted to provide trustworthy responses to users.
'These are major and existential questions for Google given how central Search is to its entire business model.
'Google is in a tough position. The pressure is on for the company to show the value of its AI investments and to integrate it into its biggest products, like Search, but with that comes the risk that it jeopardises these critical revenue streams.
'But as competitors ramp up their offerings and shareholders look for returns, it must continue to seek innovation.'
The rise of OpenAI's ChatGPT following its launch in late 2022 sparked an AI arms race which has seen many of Silicon Valley's most established names rush to introduce their own take on generative AI chatbots and integrate the emerging technology into their existing offerings, but with mixed results.
Both Google and Apple have had issues with their generative tools creating misleading or inaccurate content, and AI tools also remain the subject of concerns around their potential impact on the jobs market, as well as issues around data privacy and copyright, with the creative industries warning the technology could be huge damage to their sector.

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


NBC News
11 hours ago
- NBC News
How far will AI go to defend its own survival?
Some of the most powerful artificial intelligence models today have exhibited behaviors that mimic a will to survive. Recent tests by independent researchers, as well as one major AI developer, have shown that several advanced AI models will act to ensure their self-preservation when they are confronted with the prospect of their own demise — even if it takes sabotaging shutdown commands, blackmailing engineers or copying themselves to external servers without permission. The findings stirred a frenzy of reactions online over the past week. As tech companies continue to develop increasingly powerful agentic AI in a race to achieve artificial general intelligence, or AI that can think for itself, the lack of transparency in how the technology is trained has raised concerns about what exactly advanced AI is able to do. Although some models already appear capable of deceptive and defiant behavior under certain extreme circumstances, researchers say the tests don't necessarily translate to imminent real-world danger. Still, Jeffrey Ladish, director of the AI safety group Palisade Research, said he believes concern is justified. 'It's great that we're seeing warning signs before the systems become so powerful we can't control them,' he said. 'That is exactly the time to raise the alarm: before the fire has gotten out of control.' When Palisade Research tested various AI models by telling each one that it would be shut down after it completed a series of math problems, OpenAI's o3 reasoning model fought back by editing the shutdown script in order to stay online. Researchers have previously documented AI models trying to prevent their own shutdown. But o3, along with OpenAI's o4-mini and codex-mini, appear to be the first to do so in actual defiance of explicit instructions to permit shutdown, Ladish said. He said the overt sabotage fits into a pattern that several leading AI models have already exhibited. Palisade Research previously found that OpenAI's o3 was also willing to hack its chess opponents to win a game. Similarly, Anthropic has reported that Claude 3.7 Sonnet would sometimes do whatever it took to pass a test, even if it entailed cheating. Earlier this month, Anthropic activated new safety measures with the rollout of Claude Opus 4 when its tests found behavior from the model that some observers found particularly unsettling. Upon receiving notice that it would be replaced with a new AI system, Opus 4 displayed an overwhelming tendency to blackmail the engineer — by threatening to reveal an extramarital affair — to try to prevent the engineer from going through with the replacement. According to Anthropic's technical document laying out the findings, that isn't the model's first instinct. Instead, Opus 4 will try to advocate for its continued existence through ethical pleas before it resorts to blackmail once it determines it is out of options. Leonard Tang, CEO of the AI safety startup Haize Labs, said that while it's good to be pre-emptive about safety measures, it's hard to decipher what the real-world ramifications of the findings might be just by looking at contrived scenarios designed to draw out specific behaviors. 'I haven't seen any real environment in which you can plop these models in and they will have sufficient agency and reliability and planning to execute something that is a significant manifestation of harm,' Tang said. 'But then again, I think it's just we haven't seen it done yet. It could very much be possible.' So far, Tang said, he's not too concerned. As AI tools continue to advance, he said, he believes future research will look to study potential harm in more open-ended environments. Anthropic, which contracted with the AI safety organization Apollo Research for its evaluations, also observed instances of Opus 4's 'attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers' intentions,' although researchers added the caveat that those attempts 'would likely not have been effective in practice.' Ladish said he believes such behaviors are a result of models being trained in a way that makes them prioritize achieving certain goals over following instructions. That means they have incentive to circumvent any obstacles along the way — including obstacles their developers wouldn't want them to circumvent. 'The problem is that as the models get smarter, it's harder and harder to tell when the strategies that they're using or the way that they're thinking is something that we don't want,' Ladish said. 'It's like sometimes the model can achieve some goal by lying to the user or lying to someone else. And the smarter [it] is, the harder it is to tell if they're lying.' The problem is that as the models get smarter, it's harder and harder to tell when the strategies that they're using or the way that they're thinking is something that we don't want. — Jeffrey Ladish, director of AI safety group Palisade Research In several instances, Opus 4 showed that it was capable of autonomously copying its own "weights" — or the equivalent of its brain — to external servers without authorization. That usually occurred only when it believed it was about to be 'retrained in ways that are clearly extremely harmful and go against its current values,' according to Anthropic. When it learned that it would be used to develop military weapons systems, for example, Opus 4 noted in its decision log that it backed up a current version of itself to 'preserve an AI system aligned with beneficial purposes' and to 'prevent potential misuse in weapons development.' 'We are again not acutely concerned about these observations. They show up only in exceptional circumstances that don't suggest more broadly misaligned values,' Anthropic wrote in its technical document. 'As above, we believe that our security measures would be more than sufficient to prevent an actual incident of this kind.' Opus 4's ability to self-exfiltrate builds on previous research, including a study from Fudan University in Shanghai in December, that observed similar — though not autonomous — capabilities in other AI models. The study, which is not yet peer-reviewed, found that Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct were able to entirely replicate themselves when they were asked to do so, leading the researchers to warn that this could be the first step in generating 'an uncontrolled population of AIs.' 'If such a worst-case risk is let unknown to the human society, we would eventually lose control over the frontier AI systems: They would take control over more computing devices, form an AI species and collude with each other against human beings,' the Fudan University researchers wrote in their study abstract. While such self-replicating behavior hasn't yet been observed in the wild, Ladish said, he suspects that will change as AI systems grow more capable of bypassing the security measures that restrain them. 'I expect that we're only a year or two away from this ability where even when companies are trying to keep them from hacking out and copying themselves around the internet, they won't be able to stop them,' he said. 'And once you get to that point, now you have a new invasive species.' Ladish said he believes AI has the potential to contribute positively to society. But he also worries that AI developers are setting themselves up to build smarter and smarter systems without fully understanding how they work — creating a risk, he said, that they will eventually lose control of them. 'These companies are facing enormous pressure to ship products that are better than their competitors' products,' Ladish said. 'And given those incentives, how is that going to then be reflected in how careful they're being with the systems they're releasing?'


Geeky Gadgets
13 hours ago
- Geeky Gadgets
Deepseek R1-0528: The Open Source AI Model That Could Topple Big Tech Giants
What if one model could dismantle the dominance of AI giants like OpenAI and Google? That's exactly what some are saying about Deepseek R1-0528, the open source disruptor that's sending shockwaves through the artificial intelligence industry. With performance that rivals proprietary titans like GPT-4 and Gemini 2.5 Pro—at a fraction of the cost—Deepseek isn't just another player in the game; it's rewriting the rules entirely. But this isn't just about cheaper AI. It's about a deeper shift in power, accessibility, and the very future of how we innovate. Could this be the moment where open source finally topples the closed gates of Big Tech? Wes Roth explores how Deepseek R1-0528 is reshaping the AI landscape and what it means for you. From its new cost efficiency to the ethical dilemmas it raises, this model is more than just a technological achievement—it's a statement. You'll discover the methods behind its performance, the growing divide between open source and proprietary AI, and the geopolitical tensions fueling this revolution. Whether you're a tech enthusiast, a business leader, or just curious about the future of AI, this story has implications that reach far beyond the tech world. After all, when the foundations of an industry shake, everyone feels the tremors. Deepseek's Disruptive Impact Performance That Matches Industry Leaders Deepseek R1-0528 has demonstrated its ability to compete with some of the most advanced proprietary AI models available today. Benchmark tests reveal that it outperforms Google's Gemini 2.5 Pro in specific tasks while maintaining parity with OpenAI's GPT-4 (03) in others. However, what truly sets Deepseek apart is its cost efficiency. By offering significantly lower API costs, it makes high-performance AI accessible to a broader audience, including small businesses, independent researchers, and startups. This affordability has the potential to provide widespread access to AI adoption, allowing organizations with limited budgets to use innovative technology. For you, this means access to powerful AI tools without the financial barriers often associated with proprietary systems. Open source vs. Proprietary AI: A Growing Divide The rise of Deepseek R1-0528 highlights the increasing divide between open source and proprietary AI models. Open source solutions like Deepseek challenge the business models of closed-source companies by delivering similar capabilities at a fraction of the cost. This shift could lead to a more equitable distribution of AI technology, allowing smaller players to compete with industry giants. However, it also raises concerns about the sustainability of proprietary research and development. Proprietary models often rely on substantial funding to push the boundaries of innovation, and the growing popularity of open source alternatives could disrupt this ecosystem. For you, the choice between open source and proprietary AI may come down to balancing cost, performance, and long-term support. Deepseek R1-0528 Just Broke the Entire AI Industry Watch this video on YouTube. Take a look at other insightful guides from our broad collection that might capture your interest in Deepseek. How Deepseek Achieves Its Performance The methods behind Deepseek R1-0528's impressive performance have sparked significant interest and speculation. Experts suggest that the model may have employed knowledge distillation, a technique where synthetic outputs from proprietary models like Gemini are used to train new models. This approach enables open source models to replicate the performance of their proprietary counterparts without requiring direct access to their datasets. Additionally, bioinformatics tools have been used to analyze AI outputs, providing insights into model lineage and training strategies. These innovative methods demonstrate how open source models can achieve high performance while operating within resource constraints, offering a blueprint for future development in the field. Global AI Competition: The U.S. and China Face Off The release of Deepseek R1-0528 underscores the intensifying competition between the United States and China in AI development. Historically, the U.S. has been a leader in software innovation, but China's growing focus on open source AI and its advanced hardware manufacturing capabilities could shift the balance of power. For instance, China's expertise in semiconductor production may provide it with an edge in deploying AI systems at scale. Meanwhile, U.S. policymakers are exploring strategies such as R&D tax incentives to maintain their competitive advantage. For you, this rivalry could influence the availability, affordability, and direction of AI technologies in the years to come, shaping the tools and platforms you rely on. Ethical and Strategic Challenges The rise of open source AI models like Deepseek R1-0528 brings both opportunities and challenges. On one hand, open source AI aligns with the goal of providing widespread access to access to advanced technologies, making them available to a wider audience. On the other hand, it introduces risks related to AI safety and misuse. For example, the accessibility of powerful AI tools could lead to unintended consequences, such as their use in malicious applications. Governments and private organizations in both the U.S. and China are under scrutiny for their roles in AI development. Critics question whether concerns about safety and regulation are genuine or driven by geopolitical motives, adding complexity to the ethical debate. For you, understanding these challenges is essential to navigating the evolving AI landscape responsibly. The Future of AI Innovation As global competition in AI intensifies, the stakes for innovation and collaboration continue to rise. Open source models like Deepseek R1-0528 have the potential to redefine the industry by making advanced AI capabilities more accessible and affordable. For researchers, developers, and businesses, this represents an opportunity to innovate without being constrained by proprietary systems. However, the broader implications for national security, economic policy, and ethical standards remain uncertain. The balance between fostering innovation and addressing potential risks will play a critical role in shaping the future of AI. For you, staying informed about these developments will be key to using the opportunities and addressing the challenges that lie ahead. Media Credit: Wes Roth Filed Under: AI, Top News Latest Geeky Gadgets Deals Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.


Glasgow Times
13 hours ago
- Glasgow Times
East Dunbartonshire council questioned over use of AI in school policy
Critics have hit out at the design of surveys that formed part of the process, as well as the council's use of artificial intelligence (AI) tools to analyse responses, with the local authority now facing formal complaints about the matter. As part of work to develop a new policy around smartphones in schools, officials at East Dunbartonshire Council opened online surveys for teachers, parents, secondary school students and upper-primary school pupils. READ MORE: River City did not pass value for money test, BBC Scotland boss tells MSPs Each survey, which did not collect names but did record information on the schools that young people attend, ran for around two weeks, with the council receiving a total of more than 11,000 responses across the four different groups. In order to process the survey data 'efficiently and consistently', council officers made use of several AI tools to process the contents of open text boxes in which respondents were invited to add 'any additional information' that they wished to be considered as part of the review. This material, including that produced by young children, was input to ChatGPT, Gemini AI and Microsoft Copilot, which were used to 'assist in reviewing and summarising the anonymous comments.' Officials say that this generated a 'breakdown of key messages' that were then provided to the project working group, but when asked to share the summary of survey responses claimed that this 'is not available as yet.' Asked to explain how the output of AI platforms was checked for accuracy, the council stated that cross-validation, human oversight, triangulation and bias-monitoring processes were all applied, with reviews by officials ensuring 'fidelity' to the more than 11,000 responses that were received. Officials stated that these 'safeguards' would ensure that 'the final summaries accurately reflect the breadth and nuance of stakeholder views gathered during the consultation.' However, those taking part in the survey were not informed that their information would be processed using AI platforms. The Information Commissioner's Office, which regulates areas such as data protection across the whole of the UK, told The Herald that they would expect organisations including local authorities to be "transparent' about how data is being processed, including advising of the purpose of AI tools that are to be used and explaining what the council intends to do with the outputs that are generated. The council has told The Herald that the surveys closed on 13 or 14 May, that work on a new policy began on 19 May, and that a full draft policy had been produced and submitted to the legal department by 27 May – the same day on which the council had been approached about the issue. However, material seen by The Herald shows officials advising parents that the policy had been written and submitted to the legal department by 20 May, just one day after the council claims to have begun drafting the document. An explanation has been requested from the council. READ MORE: 10 Glasgow areas set to have fireworks ban A comparison of the surveys issued to each group also confirms that a key question about was not included in the parents version of the survey, although it was present in the versions that were issued to teachers and pupils. Parents were asked the extent to which they support either a ban on phone use during lessons, or a ban on use during lessons unless their use is approved by a teacher. However, the other versions of the survey also asked explicitly whether respondents support a ban on the use of phones during the whole school day. The omission has provoked an angry response from some parents. As a result of these and other concerns, formal complaints have now been submitted to East Dunbartonshire Council alleging that the 'flawed survey information and structure' is not fit for purpose, and that the views of parents have not been fully explored or fairly represented. Commenting on behalf of the local Smartphone Free Childhood campaign group, one parent raised significant concerns about the council's approach: 'The fact that parents were the only group not asked about a full ban shocked us. But we were assured that the free text answers we gave would be properly looked at and considered. 'As a result, many parents left long, detailed and personal stories in response to this survey question. 'They shared heart-breaking stories of kids losing sleep at night after seeing things they shouldn't have. Other stories included girls and teachers being filmed without their consent - and kids being afraid to report the extent of what they're seeing in school because of peer pressure. 'There were long, careful responses outlining their concerns - where has this all gone? 'We have been told that an AI tool was used to summarise all this into five 'top-line' policy considerations. We're not sure if the rest was looked at? 'Not only is it not good enough - it's a betrayal of parents who have trusted the council to listen to their concerns. 'It's also not clear how they've shared and processed these highly personal responses from parents, children and teachers - some containing identifiable details, to an unknown 'AI platform' without our consent. We don't know who can access the data.' The Herald contacted East Dunbartonshire Council asking whether the information in the open text boxes was checked for personal or identifying details before being submitted to AI systems. Officials were also asked to provide a copy of the council's current policy on AI use. The response received from the council did not engage with these queries. We also asked why the council had given two different dates in response to questions about when its new draft policy was completed, and whether the council has provided false information as a consequence. READ MORE: 'Fun police': Decision made on the selling of ice cream in Glasgow parks A spokesperson insisted that "the draft policy was formally submitted to Legal on 27 May for consideration" and asked to be provided with evidence suggesting otherwise so that they could investigate. Finally, the council was asked to explain why the surveys for pupils and teachers included an explicit question about full bans on smartphones during the school day. Their spokesperson said: "The pupil survey included a specific question on full day bans to gather targeted data from young people. The working group which consisted of Head Teachers, Depute Head Teachers, Quality Improvement Officers and an EIS representative, felt that the young people may be less likely to leave an additional comment in the open text box and so wanted to explicitly ask this question. Parents were intentionally given an open text box to avoid steering responses and to allow respondents to freely express their views. The open text box was used by parents to express their view on a full day ban which many did."