AI can help boost our clean energy ambitions


The National | 31-01-2025

The impact of artificial intelligence (AI) was inescapable at this year's World Economic Forum Annual Meeting. Whether dialogues focused on the global economic outlook, the future of the labour force or the energy transition, the spectre of AI was ever present. And it's not hard to see why. With breakthrough technologies and innovations making their way from research labs to factories and into supply and value chains, we are on the brink of a new era of social, economic and human possibility. Harnessing and integrating these technological breakthroughs into the global energy system will be crucial to unlocking this immense potential.

This consensus from the Swiss mountains in Davos was also echoed on the International Day of Clean Energy. At a dedicated session hosted at the UAE pavilion at the annual meeting on January 22, the global leaders who had gathered underscored the need for greater ambition and faster adoption of technologies to meet the UAE Consensus goals of tripling renewable energy capacity and doubling energy efficiency by 2030. With 2024 marking the first year in which the 1.5°C warming threshold was breached on an annual average, long-term solutions are desperately needed, and our time to deploy them is short.

Right now, the research suggests that, if scaled at the right pace, digital technologies can reduce emissions by 20 per cent by 2050 in the three highest-emitting sectors: energy, materials and mobility. That represents a significant portion of the emissions reductions needed to keep a 1.5°C future in our sights.

There is, however, a dichotomy at the centre of the AI-energy revolution. The more AI technologies and tools are developed, the more the demand for the energy that supports them will grow. Thus, energy demand is only set to increase the more we depend on AI tools and networks.
A World Economic Forum report from the start of 2024 noted that the emissions associated with AI's energy use are currently estimated at around 2-3 per cent of the global total. But that is likely to change rapidly as more companies, governments and organisations use AI to drive efficiency and productivity. And when we consider that AI systems, and generative AI in particular, might use up to 33 times more energy to complete a task than task-specific software would, it lends renewed urgency to making our energy supplies clean and renewable.

For now, the early signs of AI's effect on the energy sector are encouraging. We are already seeing a wave of technological disruption reshaping the energy system, from transforming renewable energy generation to fundamentally altering how energy is consumed across end-use sectors. On the supply side, innovations such as advanced solar photovoltaic systems, offshore wind turbines and next-generation grid-scale battery storage are enabling cleaner, more efficient energy production while improving grid reliability. In end-use sectors such as transport, buildings and industry, technologies such as green hydrogen, smart grids and electrification are driving significant reductions in carbon emissions. What's more, digital tools such as AI and blockchain are optimising energy efficiency and facilitating the integration of decentralised renewable energy systems.

This tech-driven energy revolution is simultaneously transforming how we produce and consume energy, and creating a new development pathway that prioritises clean, affordable and accessible energy. The integration of AI with renewables is also helping enhance community resilience in vulnerable regions. For example, machine learning algorithms are being used to optimise microgrids, ensuring uninterrupted power supplies during extreme weather events.
AI-powered predictive maintenance tools are reducing downtime in solar and wind facilities, while advanced forecasting models improve energy storage and grid balancing to accommodate variability in renewable energy sources. Such digital solutions are critical to scaling renewable energy systems globally and ensuring they are resilient to disruptions.

The UAE, as a global leader in renewable energy, is already demonstrating how AI can be harnessed to achieve energy resilience and security. For instance, the Mohammed bin Rashid Al Maktoum Solar Park uses AI to optimise solar panel cleaning schedules and enhance energy output. The UAE has also integrated AI into its energy planning systems, enabling real-time monitoring of grid performance and predictive analytics to mitigate potential outages. Through initiatives such as Masdar City and partnerships with global technology leaders, the UAE is using AI to drive efficiencies, reduce emissions and future-proof its energy systems.

The convergence of AI and the energy transition presents an unprecedented opportunity to tackle the twin challenges of decarbonisation and growing energy demand. However, realising this potential requires co-ordinated global action. Policymakers must prioritise investment in clean energy technologies, while industry stakeholders must adopt AI solutions responsibly to ensure they align with sustainability goals. The UAE is showing what is possible when innovation meets ambition, but it cannot do this alone.

If we want to keep 1.5°C within reach and create a future powered by clean, reliable energy, the time to act is now. AI, when integrated thoughtfully and equitably into energy systems, can help us achieve a transformative and sustainable future. But only if we commit to scaling the technologies and policies that make it possible.


Related Articles

HCLTech And UiPath Collaborate On Agentic Automation

Channel Post MEA | 2 hours ago

HCLTech and UiPath have announced a strategic partnership to accelerate agentic automation for UiPath customers globally. The partnership will drive large-scale transformation for enterprises across industries, enabling more intelligent and self-sufficient business process operations that require minimal human intervention.

HCLTech will leverage its AI expertise to deploy the UiPath Platform, enabling autonomous operations in finance, supply chain, procurement, customer service, marketing and human resources. HCLTech will support this partnership with pre-configured AI agents and controls to ensure seamless deployment and scalability. The partnership aims to enhance business agility, optimize workforce efficiency and deliver faster returns on business process automation investments for global enterprises.

HCLTech will also establish an AI Lab with UiPath in India to develop Industry Focused Repeatable Solutions (IFRS) and MVPs for the full automation lifecycle, from strategy to implementation and continuous optimization. HCLTech will leverage its global delivery model to support UiPath customers in North America, Europe and Asia-Pacific.

'As we shift towards a new era with Agentic AI, agentic automation will be critical to provide businesses with the speed and agility to transform operations and unlock new business potential. Partnering with HCLTech allows UiPath to extend the power of its AI-powered automation to enterprises globally, accelerating intelligent transformation at scale. With HCLTech's deep expertise in AI, automation and industry solutions, UiPath customers will benefit from best-in-class implementation and business impact,' said Ashim Gupta, Chief Operating Officer and Chief Financial Officer, UiPath.

'By co-creating next-gen AI-powered solutions with UiPath, HCLTech is setting new benchmarks for agentic autonomous operations that unlock unprecedented efficiency, agility and innovation for enterprises.
Our proven expertise in hyperautomation, AI and cloud-first architectures helps us provide industry-specific and advanced automation solutions at scale,' said Raghu Kidambi, Corporate Vice President and Global Head, Digital Process Operations, HCLTech.

Artificial Intelligence in cybersecurity: savior or saboteur?

Khaleej Times | 3 hours ago

Artificial intelligence has rapidly emerged as both a cornerstone of innovation and a ticking time bomb in the realm of cybersecurity. Once viewed predominantly as a force for good, enabling smarter threat detection, automating incident responses and predicting attacks before they happen, AI has now taken on a double-edged role. The very capabilities that make it invaluable to cybersecurity professionals are now being exploited by cybercriminals to launch faster, more convincing and more damaging attacks.

From phishing emails indistinguishable from real business correspondence to deepfake videos that impersonate CEOs and public figures with chilling accuracy, AI is arming attackers with tools that were previously the stuff of science fiction. And as large language models (LLMs), generative AI and deep learning evolve, the tactics used by bad actors are becoming more scalable, precise and difficult to detect.

'The threat landscape is fundamentally shifting,' says Sergey Lozhkin, Head of the Global Research & Analysis Team for the Middle East, Türkiye, and Africa at Kaspersky. 'From the outset, cybercriminals began using large language models to craft highly convincing phishing emails. Poor grammar and awkward phrasing, once dead giveaways, are disappearing. Today's scams can perfectly mimic tone, structure, and professional language.'

But the misuse doesn't stop at email. Attackers are now using AI to create fake websites, generate deceptive images, and even produce deepfake audio and video to impersonate trusted figures. In some cases, these tactics have tricked victims into transferring large sums of money or divulging sensitive data.

According to Roland Daccache, Senior Manager – Sales Engineering at CrowdStrike MEA, AI is now being used across the entire attack chain. 'Generative models are fueling more convincing phishing lures, deepfake-based social engineering, and faster malware creation.
For example, DPRK-nexus adversary Famous Chollima used genAI to create fake LinkedIn profiles and résumé content to infiltrate organisations as IT workers. In another case, attackers used AI-generated voice and video deepfakes to impersonate executives for high-value business email compromise (BEC) schemes.'

The cybercrime community is also openly discussing on dark web forums how to weaponize LLMs for writing exploits, shell commands and malware scripts, further lowering the barrier to entry for would-be hackers. This democratisation of hacking tools means that even novice cybercriminals can now orchestrate sophisticated attacks with minimal effort.

Ronghui Gu, Co-Founder of CertiK, a leading blockchain cybersecurity firm, highlights how AI is empowering attackers to scale and personalize their strategies. 'AI-generated phishing that mirrors human tone, deepfake technology for social engineering, and adaptive tools that bypass detection are allowing even low-skill threat actors to act with precision. For advanced groups, AI brings greater automation and effectiveness.'

On the technical front, Janne Hirvimies, Chief Technology Officer of QuantumGate, notes a growing use of AI in reconnaissance and brute-force tactics. 'Threat actors use AI to automate phishing, conduct rapid data scraping, and craft malware that adapts in real time. Techniques like reinforcement learning are being explored for lateral movement and exploit optimisation, making attacks faster and more adaptive.'

Fortifying Cyber Defenses

To outsmart AI-enabled attackers, enterprises must embed AI not just as a support mechanism, but as a central system in their cybersecurity strategy. 'AI has been a core part of our operations for over two decades,' says Lozhkin. 'Without it, security operations center (SOC) analysts can be overwhelmed by alert fatigue and miss critical threats.'
Kaspersky's approach focuses on AI-powered alert triage and prioritisation through advanced machine learning, which filters noise and surfaces the most pressing threats. 'It's not just about automation — it's about augmentation,' Lozhkin explains. 'Our AI Technology Research Centre ensures we pair this power with human oversight. That combination of cutting-edge analytics and skilled professionals enables us to detect over 450,000 malicious objects every day.'

But the AI evolution doesn't stop at smarter alerts. According to Daccache, the next frontier is agentic AI, a system that can autonomously detect, analyze, and respond to threats in real time. 'Traditional automation tools can only go so far,' Daccache says. 'What's needed is AI that thinks and acts — what we call agentic capabilities. This transforms AI from a passive observer into a frontline responder.'

CrowdStrike's Charlotte AI, integrated within its Falcon platform, embodies this vision. It understands security telemetry in context, prioritises critical incidents, and initiates immediate countermeasures, reducing analyst workload and eliminating delays during high-stakes incidents. 'That's what gives defenders the speed and consistency needed to combat fast-moving, AI-enabled threats,' Daccache adds.

Gu believes AI's strength lies in its ability to analyze massive volumes of data and identify nuanced threat patterns that traditional tools overlook. 'AI-powered threat detection doesn't replace human decision-making — it amplifies it,' Gu explains. 'With intelligent triage and dynamic anomaly detection, AI reduces response time and makes threat detection more proactive.' He also stresses the importance of training AI models on real-world, diverse datasets to ensure adaptability. 'The threat landscape is not static. Your AI defenses shouldn't be either,' Gu adds.

At the core of any robust AI integration strategy lies data, and lots of it.
Hirvimies advocates for deploying machine learning models across SIEM (Security Information and Event Management) and SOAR (Security Orchestration, Automation, and Response) platforms. 'These systems can correlate real-time threat intelligence, behavioral anomalies, and system events to deliver faster, more precise responses,' he says. 'Especially when it comes to detecting novel or stealthy attack patterns, machine learning makes the difference between catching a threat and becoming a headline.'

Balancing Innovation with Integrity

While AI can supercharge threat detection, response times, and threat simulations, it also brings with it the potential for misuse, collateral damage, and the erosion of privacy. 'Ethical AI use demands transparency, clear boundaries, and responsible data handling,' says Lozhkin. 'Organisations must also ensure that employees are properly trained in the safe use of AI tools to avoid misuse or unintended exposure to threats.' He highlights Kaspersky's Automated Security Awareness Platform, which now includes dedicated sections on AI-assisted threats and responsible usage, reflecting the company's commitment to proactive education.

When AI is deployed in red teaming or simulated cyberattacks, the risk matrix expands. Gu warns that AI systems, if left unchecked, can make decisions devoid of human context, potentially leading to unintended and widespread consequences. 'Ethical AI governance, robust testing environments, and clearly defined boundaries are essential,' he says, underlining the delicate balance required to simulate threats without crossing into unethical territory.

Daccache emphasises the importance of a privacy-first, security-first approach. 'AI must be developed and operated with Privacy-by-Design and Secure-by-Design principles,' he explains. 'This extends to protecting the AI systems themselves, including their training data, operational logic, and outputs, from adversarial manipulation.'
Daccache also points to the need for securing both AI-generated queries and outputs, especially in sensitive operations like red teaming. Without such safeguards, there's a real danger of data leakage or misuse. 'Transparency, accountability, and documentation of AI's capabilities and limitations are vital, not just to build trust, but to meet regulatory and ethical standards,' he adds.

Despite AI's growing autonomy, human oversight remains non-negotiable. 'While AI can accelerate simulations and threat detection, it must be guided by skilled professionals who can interpret its actions with context and responsibility,' says Daccache. This human-AI collaboration ensures that the tools remain aligned with organisational values and ethical norms.

Hirvimies rounds out the conversation with additional cautionary notes: 'Privacy violations, data misuse, bias in training datasets, and the misuse of offensive tools are pressing concerns. Transparent governance and strict ethical guidelines aren't optional, they're essential.'

Balancing the Equation

While AI promises speed, scale, and smarter defense mechanisms, experts caution that an over-reliance on these systems, especially when deployed without proper calibration and oversight, could expose organisations to new forms of risk. 'Absolutely, over-reliance on AI can backfire if systems are not properly calibrated or monitored,' says Lozhkin. 'Adversarial attacks, where threat actors feed manipulated data to mislead AI, are a growing concern. Additionally, AI can generate false positives, which can overwhelm security teams and lead to alert fatigue. To avoid this, companies should use a layered defence strategy, retrain models frequently, and maintain human oversight to validate AI-driven alerts and decisions.'

This warning resonates across the cybersecurity landscape. Daccache echoes the concern, emphasising the need for transparency and control.
'Over-relying on AI, especially when treated as a black box, carries real risks. Adversaries are already targeting AI systems, from poisoning training data to crafting inputs that exploit model blind spots,' he explains. 'Without the right guardrails, AI can produce false positives or inconsistent decisions that erode trust and delay response.'

Daccache stresses that AI must remain a tool that complements, not replaces, human decision-making. 'AI should be an extension of human judgement. That requires transparency, control, and context at every layer of deployment. High-quality data is essential, but so is ensuring outcomes are explainable, repeatable and operationally sound,' he says. 'Organisations should adopt AI systems that accelerate outcomes and are verifiable, auditable and secure by design.'

Gu adds that blind spots in AI models can lead to serious lapses. 'AI systems are not infallible,' he says. 'Over-reliance can lead to susceptibility to adversarial inputs or overwhelming volumes of false positives that strain human analysts. To mitigate this, organizations should adopt a human-in-the-loop approach, combine AI insights with contextual human judgment, and routinely stress-test models against adversarial tactics.' Gu also warns about the evolving tactics of bad actors. 'An AI provider might block certain prompts to prevent misuse, but attackers are constantly finding clever ways to circumvent these restrictions. This makes human intervention all the more important in companies' mitigation strategies.'

Governing the Double-Edged Sword

As AI continues to embed itself deeper into global digital infrastructure, the question of governance looms large: will we soon see regulations or international frameworks guiding how AI is used in both cyber defense and offense? Lozhkin underscores the urgency of proactive regulation. 'Yes, there should definitely be an international framework.
AI technologies offer incredible efficiency and progress, but like any innovation, they carry their fair share of risks,' he says. 'At Kaspersky, we believe new technologies should be embraced, not feared. The key is to fully understand their threats and build strong, proactive security solutions that address those risks while enabling safe and responsible innovation.'

For Daccache, the focus is not just on speculative regulation, but on instilling foundational principles in AI systems from the start. 'As AI becomes more embedded in cybersecurity and digital infrastructure, questions around governance, risk, and accountability are drawing increased attention,' he explains. 'Frameworks like the GDPR already mandate technology-neutral protections, meaning what matters most is how organizations manage risk, not whether AI is used.' Daccache emphasises that embedding Privacy-by-Design and Secure-by-Design into AI development is paramount. 'To support this approach, CrowdStrike offers AI Red Teaming Services, helping organisations proactively test and secure their AI systems against misuse and adversarial threats. It's one example of how we're enabling customers to adopt AI with confidence and a security-first mindset.'

On the other hand, Gu highlights how AI is not only transforming defensive mechanisms but also fuelling new forms of offensive capabilities. 'As AI becomes integral to both defence and offense in cyberspace, regulatory frameworks will be necessary to establish norms, ensure transparency, and prevent misuse. We expect to see both national guidelines and international cooperation, similar to existing cybercrime treaties, emerge to govern AI applications, particularly in areas involving privacy, surveillance, and offensive capabilities.'

Echoing this sentiment, Hirvimies concludes by saying that developments are already underway. 'Yes. Regulations like the EU AI Act and global cyber norms are evolving to address dual-use AI,' he says.
'We can expect more international frameworks focused on responsible AI use in cyber defence, limits on offensive AI capabilities, and cross-border incident response cooperation. At QuantumGate, we've designed our products to support this shift and facilitate compliance with the country's cryptography regulations.'

China's export curbs on rare earth minerals worry Europe

Gulf Today | 4 hours ago

China is flexing its economic muscle in more senses than one. Its decision to restrict exports of rare earth minerals and magnets has sent shivers down the spines of global automakers in Germany and in the United States. Rare earth minerals are needed in key sectors such as car manufacturing, the semiconductor industry and aerospace. China possesses half of the global rare earth mineral reserves.

The Chinese decision was mainly a counter to US President Donald Trump's refusal to export crucial computer chips needed for AI, and the US refusal to allow imports from Chinese chipmaker Huawei. More importantly, what has angered the Chinese is the refusal of student visas to Chinese students, and the cancellation of the visas of those already studying at American universities.

Europe is caught in the crossfire of the trade war between the two giants, the US and China. Europeans, as well as Americans, have suddenly realised that it is not such a good thing to depend on China for either rare earth minerals or manufactured goods, including cars. The West is desperately looking to reduce its dependence on China and is seeking to diversify its supply chains. European Union (EU) Commissioner for Industrial Strategy Stephane Sejourne said, 'We must reduce our dependencies on all countries, particularly on a number of countries like China, on which we are more than 100 per cent dependent.' Meanwhile, EU Trade Commissioner Maros Sefcovic said that he was in touch with his Chinese counterpart and that they had agreed to clarify the rare earth minerals situation.

Major automakers like Mercedes and BMW claim that they have enough inventories and that their production schedules will not be affected. But it is clear that shortages are looming on the horizon.
The US-China trade talks are crucial, even as US President Donald Trump wrote on his social media platform, Truth Social, 'I like President Xi of China, always have, and always will, but he is VERY TOUGH, AND EXTREMELY HARD TO MAKE A DEAL WITH.' Trump blames China for breaking the deal made in Geneva to roll back the tariffs that each side had imposed on the other; Trump reduced the tariffs from 145 per cent to 30 per cent, and China reduced its own from 135 per cent.

Mercedes-Benz production chief Joerg Burzer said he was talking to the company's top suppliers about building 'buffer' stocks in anticipation of future trouble. He said that Mercedes is well stocked for now and that production schedules are not affected. But automakers in Europe and America have aired their worries about the curbs on Chinese exports of rare earth minerals. Many captains of industry are lobbying their governments to resolve the deadlock. Wolfgang Weber, CEO of Germany's electrical and digital industry association, ZVEI, said in an emailed statement, 'Companies currently feel abandoned by politicians and are partly looking for solutions to their difficult situation on their own in China.'

Trump's declaration of a tariff war against America's trade partners had mostly drawn quiet murmurs, and many countries, from the EU to Japan, are trying to work out trade deals without crossing swords with the US. But China was not willing to take Trump's tariffs passively. It is aware that it has enough power to counter American tariffs with tariffs of its own. The Chinese have always been defiant of the Western world, even when they did not have the economic power they now have.

There is, of course, a harsh fact: China needs the Western countries to maintain its economic growth. It is its exports to Western countries that have made it rich and powerful. Europe and the US need the cheap labour of China, and China needs the Western markets.
They have to strike a deal with each other.
