
Google DeepMind CEO says global AI cooperation 'difficult'
Artificial intelligence pioneer and Google DeepMind CEO Demis Hassabis said on Monday that greater international cooperation around AI regulation was needed but "difficult" to achieve "in today's geopolitical context".

At a time when AI is being integrated across all industries, its uses have raised major ethical questions, from the spread of misinformation to its impact on employment and the loss of technological control.

At London's South by Southwest (SXSW) festival on Monday, Hassabis, who won a Nobel Prize in Chemistry for his research on AI, also addressed the challenges that artificial general intelligence (AGI) -- a technology that could match and even surpass human capability -- would bring.
"The most important thing is it's got to be some form of international cooperation because the technology is across all borders. It's going to get applied to all countries," Hassabis said.
"Many, many countries are involved in researching or building data centres or hosting these technologies. So I think for anything to be meaningful, there has to be some sort of international cooperation or collaboration and unfortunately that's looking quite difficult in today's geopolitical context," he said.
At Paris's AI summit in February, 58 countries -- including China, France, India, the European Union and the African Union Commission -- called for enhanced coordination on AI governance.
But the US warned against "excessive regulation", with US Vice President JD Vance saying it could "kill a transformative sector".
Alongside the US, the UK refused to sign the summit's appeal for an "open", "inclusive" and "ethical" AI.
Hassabis on Monday advocated for the implementation of "smart, adaptable regulation" because "it needs to kind of adapt to where the technology ends up going and what the problems end up being".
Related Articles


Time of India
Building resilient AI strategies for Indian enterprises
Artificial intelligence (AI) is progressively transforming businesses around the world, and India is not far behind. The technology is expected to bring about previously untapped economies of scale in the country and boost innovation and productivity. By 2035, AI is expected to have a potential economic impact on India of USD 15.7 trillion while creating nearly 400,000 new jobs, illustrating the technology's breakthrough potential in Indian workplaces. This makes the jobs of C-suite executives and industry leaders extremely critical when deploying AI in workflows, as the goal should be to balance innovation, ethics, and improved security.

Cyber Resilience in the AI Era

However, increased reliance on AI has also expanded the scope and complexity of cybercrime. Data shows 95 entities were attacked in 2024, making India the second most targeted country. Security has thus emerged as the highest priority in developing and implementing new AI technologies. Organizations must create a coordinated defense plan that integrates proactive security measures into their systems and procedures to maintain safety standards. One way to do this is by using generative AI (Gen AI) security systems, specifically autonomous or agentic AI. Because these systems are designed to reduce human error, prioritize risks, and automate alarm analysis, they significantly increase overall security resilience. Their inherent ability to rapidly process massive amounts of data makes them ideal for pre-emptively detecting attacks and reacting to them in real time.

Creating a Governance Framework

Another form of security lapse is the unauthorized use of AI tools within an organization.
Such lapses may result in data breaches, misinformation, compliance violations, and security vulnerabilities -- issues that will define the AI era. To avoid such scenarios, having an AI governance framework in place is key. Such frameworks should include routine monitoring of the AI tools in use to ensure compliance with regulatory standards. The European Union (EU), which set the norm for AI regulation with its AI Act of 2024, can serve as inspiration for such a structure.

A change of mindset in the way security is viewed is also key. Security should be a top priority for leaders in all departments, not just those in IT. One approach is a cross-functional leadership team comprising security leaders, C-suite executives, engineers, developers, data management, security operations, IT operations, and incident response teams. This team can then collaborate to establish a cyber resilience strategy while referencing pre-existing frameworks. Such a collaborative environment also helps employees understand security measures, which goes a long way toward building a security-aware culture. The result is a stronger overall security posture and a clear message that security is the top priority.

Innovation with Accountability

Recent reports state that India's AI market will grow at a CAGR of 25-35%, reinforcing the country's innovation and job creation potential. The government has launched initiatives such as the IndiaAI Mission and is actively funding leading universities to develop solutions that address privacy concerns and cyber threats. The focus, therefore, should be on establishing a governance framework that encourages innovation while ensuring accountability. Businesses must develop and implement a framework that offers users a responsible and ethical system. Just as sustainability has become integral to decision-making, leaders must go beyond mere compliance checkboxes and take an active role in shaping AI ethics.
The future depends on leadership combining technology with the requisite guardrails, allowing India to pioneer a path to innovation, growth, and responsible adoption of AI.

The author is CTO of Hexaware Technologies.


Indian Express
How AI can be a solution — not a problem — in the fight against climate change
Written by Zenin Osho

In Maharashtra's drought-prone Baramati district, sugarcane farmers have long faced a tough trade-off: maximise yields or conserve water. Now, with the help of artificial intelligence (AI), they are managing to do both. Farmers are using AI-driven predictions to optimise irrigation schedules, leading to a 30 per cent reduction in water use. Crucially, it has also cut electricity costs for farmers by around 25 per cent, since less water means less reliance on power-hungry pumps. This hints at a broader truth: AI, despite concerns over its energy use, can help drive real-world climate solutions by making industries leaner, cheaper, and greener.

Much of the anxiety around AI stems from its growing appetite for electricity. Training large models consumes roughly 10 times more energy than a traditional web search. Greenhouse gas emissions from big technology companies have risen by nearly a third in recent years, and with vast new data centres being built, further increases seem inevitable. Yet the alarmism is often misplaced. In absolute terms, AI remains a relatively modest consumer of energy. According to the IEA, data centres account for about 1.5 per cent of global electricity use today, and that figure could double by 2030. But most of that demand is driven by streaming, social media and e-commerce, not AI. Even if AI's share grows sharply, its potential to decarbonise some of the hardest-to-abate industries -- while tackling both carbon and short-lived climate pollutants like methane -- is becoming increasingly difficult to ignore.

Take methane, for instance. Although less notorious than carbon dioxide, methane is a far more potent, if shorter-lived, greenhouse gas, and tackling it quickly could offer major climate gains. AI-powered startups are already rising to the challenge. GHGSat, for example, uses satellites equipped with advanced spectrometers and machine learning to detect facility-level methane leaks invisible to conventional monitoring.
Livestock, particularly cattle, are another major methane source. Startups like Rumin8 and Symbrosia are developing AI-informed feed supplements that curb emissions from digestion. Meanwhile, DSM-Firmenich's Bovaer, now approved for use in over 55 countries, can slash methane emissions from cattle by more than 30 per cent. Agriculture offers further opportunity: flooded paddy fields, which produce significant methane, could also benefit from AI. Just as AI tools are helping sugarcane farmers in Baramati optimise irrigation and cut water use, similar approaches could reduce flooding periods in rice cultivation, lowering methane emissions while conserving water.

AI's promise in modernising energy systems is only just beginning to be realised. Use cases in renewable energy integration remain early, but encouraging signs are emerging. In the United States, Alphabet's Tapestry project, combining AI and cloud technologies, is helping grid operators automate the sluggish approval process for clean energy projects, speeding the deployment of wind and solar power.

Similar challenges, albeit on a larger scale, loom in India. Integrating intermittent renewables into ageing, stressed grids remains complex. Distribution companies (discoms) -- the entities responsible for buying electricity from generating companies and distributing it to end-consumers -- are often financially strained and face acute difficulties in adopting new technologies. Yet AI offers powerful tools. It can improve demand forecasting, optimise grid load balancing, predict faults before they cascade, and automate grid planning, significantly expediting renewable integration. Crucially, Indian startups such as Ambee, Atsuya, and Sustlabs are actively deploying AI and IoT for sustainable energy solutions. Given India's ambitious goal of adding 500GW of non-fossil capacity by 2030, these efficiencies are no longer optional.
While widespread AI adoption among discoms may still seem distant, the potential gains -- in reduced losses, enhanced reliability, and lower costs -- make a compelling case for phased, strategic deployment, supported by policy reform and investment.

Batteries, too, are critical to this transition. The ability to store renewable energy when the sun does not shine or the wind does not blow remains a bottleneck. Quantum computing, closely linked to advances in AI, offers a tantalising possibility. By simulating new battery materials, such as lithium nickel oxide, at the atomic level, researchers hope to design cheaper, longer-lasting storage solutions, accelerating the shift to a cleaner grid. Lithium nickel oxide is a promising material that could enable batteries with higher energy density and lower costs than conventional lithium-ion designs. Teams at Sandia National Laboratories and Google Quantum AI are already using quantum simulations to accelerate battery research. They are also applying quantum techniques to improve modelling of fusion reactions, potentially unlocking a future of abundant, carbon-free energy.

Industrial sectors that have long resisted decarbonisation are also beginning to show signs of change. Cement manufacturing, responsible for around 8 per cent of global emissions, is deploying AI to optimise kiln operations, cutting fuel use and emissions. In shipping, AI-driven navigation systems analyse real-time data on weather patterns and ocean currents to chart more efficient routes, saving time, fuel, and carbon.

Startups are crucial in pushing these frontiers. Their agility and willingness to bet on unproven ideas give them an edge over slower-moving incumbents, but they need deep ecosystem support, including patient capital, reliable infrastructure, expert mentorship, and clear regulatory pathways.
Initiatives like Google's startup programs provide a template, offering access to advanced AI models, cloud computing resources, and tailored guidance to help founders navigate technological and policy hurdles. The government's role -- strategic investment in R&D, targeted support for climate-focused startups, and regulatory frameworks that encourage innovation without creating unnecessary barriers -- is equally essential.

Transparency on AI's environmental impact is critical. From 2026, the European Union will require companies to report AI-related energy consumption; other jurisdictions should adopt similar measures. Data centres must evolve as well, shifting workloads to match renewable generation, investing in battery storage, and aiming for 24/7 carbon-free operations. Big technology firms should leverage their considerable purchasing power to accelerate the build-out of clean energy infrastructure, rather than relying primarily on offsets.

Combating climate change demands that we tackle both carbon and super-pollutants like methane. While concerns about AI's energy footprint are valid, its potential for deep decarbonisation and systemic change is undeniable. If policymakers, investors, scientists, and entrepreneurs unite, AI can transform from a perceived climate problem into one of our most potent solutions, with startups already blazing the trail towards a new era of innovation that matches the urgency of the challenge ahead.

The writer is Director, India Program of the Institute for Governance & Sustainable Development (IGSD)


Mint
Parmy Olson: AI chatbots could become advertising vehicles
Chatbots might hallucinate and flatter their users too much, but at least their subscription model is healthy for our well-being. Many Americans pay about $20 a month to use the premium versions of OpenAI's ChatGPT, Google's Gemini Pro or Anthropic's Claude. The result is that the products are designed to provide maximum utility.

Don't expect this status quo to last. Subscription revenue has a limit, and even the most popular models are under pressure to find new revenue streams. Unfortunately, the most obvious one is advertising, the web's most successful business model. AI builders are already exploring ways to plug more ads into their products, and while that's good for their bottom lines, it also means we're about to see a new chapter in the attention economy that fuelled the internet.

If social media's descent into engagement-bait is any guide, the consequences will be profound. One cost is addiction. OpenAI says a cohort of 'problematic' ChatGPT users are hooked on the tool. Putting ads into ChatGPT, which now has more than 500 million active users, won't spur the company to help those people reduce their use of the product. Quite the opposite. Advertising was the reason companies like Mark Zuckerberg's Meta designed algorithms to promote engagement and keep users scrolling, so they saw more ads and drove more revenue. It's the reason behind the so-called 'enshittification' of the web, a place now filled with clickbait and social media posts that spark outrage.

Baking such incentives into AI will almost certainly lead its designers to find ways to trigger more dopamine spikes, perhaps by complimenting users even more, asking personal questions to get them talking for longer or even cultivating emotional attachments. Millions of people in the Western world already view chatbots in apps like Chai and Talkie as friends or romantic partners.
Imagine how persuasive such software could be when its users are beguiled. Imagine a person telling their AI they're feeling depressed, and the system recommending some holiday destinations or medication to address the problem.

Is that how ads would work in chatbots? The answer is subject to much experimentation. Google's ad network, for instance, recently started putting ads in third-party chatbots. Chai, a romance and friendship chatbot, serves pop-up ads. The AI answer engine Perplexity displays sponsored questions: after an answer to a question about job hunting, for instance, it might include a list of suggested follow-ups, including, at the top, 'How can I use Indeed to enhance my job search?' Perplexity's CEO Aravind Srinivas told a podcast in April that the company was looking to go further by building a browser to 'get data even outside the app' to enable what he called 'hyper-personalized' ads.

For some apps, that may mean weaving ads directly into conversations, using the intimate details shared by users to predict and potentially even manipulate them into wanting something, then selling those intentions to the highest bidder. Researchers at Cambridge University referred to this as the forthcoming 'intention economy' in a recent paper, with chatbots steering conversations toward a brand or even a direct sale. As evidence, they pointed to a 2023 blog post from OpenAI calling for 'data that expresses human intention' to help train its models, a similar effort from Meta, and Apple's 2024 developer framework that helps apps work with Siri to 'predict actions someone might take in the future.'
As for OpenAI's Sam Altman, nothing says 'we're building an ad business' like poaching Fidji Simo, the person who built delivery app Instacart into an advertising powerhouse, to help OpenAI 'scale as we enter a next phase of growth.' In Silicon Valley parlance, to 'scale' often means to quickly expand your user base by offering a service for free, with ads.

Tech companies will inevitably claim that advertising is a necessary part of democratizing AI. But we have seen how 'free' services cost people their privacy, autonomy and even their mental health. AI knows more about us than Google or Facebook ever did: details about our health concerns, relationship issues and work. In two years, chatbots have also built a reputation as trustworthy companions and arbiters of truth. When people trust artificial intelligence that much, they're more vulnerable to targeted manipulation.

AI advertising should be regulated before it becomes too entrenched, or we'll repeat the mistakes made with social media, scrutinizing the fallout of a lucrative business model only after the damage is done.

©Bloomberg

The author is a Bloomberg Opinion columnist covering technology.