Powering the AI revolution: Solving the energy and sustainability puzzle of data centers


Time of India · 3 days ago

In a world increasingly driven by data and digital intelligence, the race to build bigger and faster AI models is fundamentally reshaping the energy landscape. From powering predictive algorithms to training generative models like ChatGPT, Artificial Intelligence (AI) is no longer just a tool—it's the engine behind the modern economy. But this engine has a growing appetite, and at its core lies one of the most energy-intensive infrastructures of the digital age: the data center.
The Surge of AI—and Its Energy Wake
The AI boom is rewriting the rules of infrastructure planning. According to the International Energy Agency (IEA), global data center electricity consumption could double by 2026, reaching over 1,000 terawatt-hours (TWh)—roughly comparable to the annual electricity demand of Japan. This spike is being driven not only by hyperscale cloud providers, but also by a new generation of AI workloads that require massive computing power and near-continuous uptime.
A single AI query can require 10 times more energy than a typical web search. With audio-visual generative tools on the rise, the pressure on energy systems will only intensify. Already, data centers consume about 1.5% of global electricity and contribute to 1% of energy-related greenhouse gas emissions—figures expected to rise sharply as AI adoption scales.
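The "10 times" comparison can be made concrete with a rough back-of-envelope calculation. The per-query figures and the query volume below are illustrative assumptions, not measured values:

```python
# Back-of-envelope comparison of web-search vs. AI-query energy.
# All figures are illustrative assumptions: ~0.3 Wh per web search,
# ~3 Wh per AI query (the "10x" claim), and a hypothetical volume
# of 1 billion queries per day.
SEARCH_WH = 0.3        # assumed energy per web search, watt-hours
AI_QUERY_WH = 3.0      # assumed energy per AI query (10x a search)
QUERIES_PER_DAY = 1e9  # hypothetical daily query volume

def annual_twh(wh_per_query: float, queries_per_day: float) -> float:
    """Convert per-query watt-hours to annual terawatt-hours."""
    wh_per_year = wh_per_query * queries_per_day * 365
    return wh_per_year / 1e12  # 1 TWh = 1e12 Wh

print(f"Search:   {annual_twh(SEARCH_WH, QUERIES_PER_DAY):.2f} TWh/yr")
print(f"AI query: {annual_twh(AI_QUERY_WH, QUERIES_PER_DAY):.2f} TWh/yr")
```

Under these assumptions, shifting a billion daily queries from search to AI adds roughly a terawatt-hour per year, which is why the per-query multiplier matters at scale.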
The Power Trilemma: Energy, Carbon, and Water
Behind every AI interaction is a complex physical footprint: rows of processors, high-density cooling systems, and vast power supply chains. Nearly 40% of a data center's electricity goes to computing, and another 40% to cooling. But energy and emissions aren't the only concerns—water use is an emerging and often underappreciated pressure point.
Consider this: a 20-question AI session can indirectly consume about 500ml of water, primarily for cooling. In the U.S., mid-sized data centers can withdraw up to 300,000 gallons of water a day—enough to meet the daily needs of 100,000 households. By 2027, AI-related water withdrawals could reach up to 6.6 billion cubic meters globally, compounding the environmental burden in already water-stressed regions.
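The water figures above involve a few unit conversions, and a short sketch makes the scale easier to check. The per-session and per-facility numbers are taken from the article; only the gallon-to-liter factor is added:

```python
# Rough unit conversions for the water figures above (illustrative only).
US_GALLON_L = 3.785                # liters per US gallon
withdrawal_gal_per_day = 300_000   # mid-sized data center (from the article)

# Convert the daily withdrawal to cubic meters (1 m^3 = 1,000 L).
withdrawal_m3_per_day = withdrawal_gal_per_day * US_GALLON_L / 1000
print(f"{withdrawal_m3_per_day:.0f} m^3 per day")

# At ~500 ml (0.5 L) per 20-question session, one day's withdrawal
# corresponds to this many sessions:
sessions_per_day = withdrawal_m3_per_day * 1000 / 0.5
print(f"~{sessions_per_day:,.0f} sessions per day")
```

A single such facility withdraws on the order of 1,100 cubic meters a day, which helps put the projected 6.6 billion cubic meters of global AI-related withdrawals into perspective.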
This growing trilemma—energy intensity, carbon emissions, and water use—is pushing the industry to rethink how data centers are powered and cooled.
Redefining the Energy Stack: Smarter Solutions for a More Sustainable AI
To support AI's exponential growth while meeting sustainability targets, the industry is turning to next-generation energy technologies. These innovations are enabling resilient, efficient, and lower-carbon power systems—a crucial shift for an AI-driven future.
1. High-Efficiency Turbines for More Sustainable Power
Flexible gas turbine technologies are gaining traction for their ability to deliver high power density with a lower carbon footprint. GE Vernova's aeroderivative gas turbines, for example, are engineered for high performance with lower emissions, offering the ability to run on natural gas, hydrogen blends, or biofuels. Their modularity and fast ramp-up capabilities make them ideal for powering high-load facilities like AI data centers—especially in regions with constrained grid access or unstable supply.
Just as important, these turbines consume significantly less water than traditional power systems—an advantage in arid zones like India, the Middle East, or sub-Saharan Africa.
2. Resilient Grids and Smart Storage
Reliable power isn't just about generation—grid stability and storage are critical, especially for data centers that can't afford downtime. Integrated systems like GE Vernova's FLEXRESERVOIR provide a modular solution: combining battery storage, inverters, and intelligent energy management systems to help facilities integrate renewables, manage peak loads, and ensure 24/7 uptime.
Such systems are key to balancing AI data center load fluctuations, ensuring power stability, and providing fast-response backup power to these facilities.
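To make the idea of managing peak loads concrete, here is a toy peak-shaving loop: a battery discharges when facility load exceeds a grid-import cap and recharges when load falls below it. This is a hypothetical sketch; the load profile, cap, and battery parameters are invented, and real energy management systems such as FLEXRESERVOIR are far more sophisticated.

```python
# Toy peak-shaving: discharge the battery when load exceeds a grid-import
# cap, recharge when there is headroom. One list entry = one hour.
def shave_peaks(load_mw, cap_mw, battery_mwh, max_rate_mw):
    """Return the grid import per step after battery smoothing."""
    soc = battery_mwh  # state of charge; start fully charged
    grid = []
    for load in load_mw:
        if load > cap_mw:
            # Discharge to cover the peak, limited by rate and charge left.
            discharge = min(load - cap_mw, max_rate_mw, soc)
            soc -= discharge
            grid.append(load - discharge)
        else:
            # Recharge with spare headroom, limited by rate and capacity.
            charge = min(cap_mw - load, max_rate_mw, battery_mwh - soc)
            soc += charge
            grid.append(load + charge)
    return grid

profile = [40, 45, 60, 75, 70, 50, 42]  # hypothetical AI-load profile (MW)
print(shave_peaks(profile, cap_mw=55, battery_mwh=30, max_rate_mw=20))
```

Even this crude loop shows the trade-off a real controller manages: the battery clips most peaks to the cap, but once it is depleted (the fifth hour here) the grid must absorb the excess, which is why sizing storage against worst-case AI load swings matters.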
3. Cooling Reinvented: Toward Zero-Water Operations
On the cooling front, innovation is transforming efficiency. Traditional water-intensive methods are giving way to advanced alternatives:
Liquid cooling systems now deliver targeted heat removal at the chip level.
Immersion cooling submerges hardware in specialized fluids, improving performance and slashing water use.
Rear-Door Heat Exchangers allow localized cooling for high-density racks, reducing overall system demand.
These approaches not only boost energy efficiency but also set the stage for near-zero water usage—a breakthrough for future-ready data centers.
Regional Relevance: India as an AI Growth Hub
India is fast emerging as a pivotal AI marketplace. Although it generates nearly 20% of the world's data, it currently hosts only 5.5% of global data center capacity—a stark infrastructure shortfall. This gap is driving a wave of investment, with up to $60 billion expected in cloud and server infrastructure.
'As government policy pushes digitalization and the adoption of lower-carbon energy systems, India has the potential to become a sustainable data center hub. But doing so will require smart, efficient energy systems that address both power and water constraints—an area where advanced technologies like GE Vernova's turbines and storage platforms can make a meaningful difference.' – Venkat Kannan, President, Gas Power Solutions, Asia at GE Vernova.
The Path Forward: Building a Sustainable Digital Future
The AI revolution is already reshaping economies—but how we power it will define its long-term impact. Solving the Power Trilemma with performance and sustainability demands a hybrid, intelligent energy mix—where flexible generation, renewable integration, and innovative cooling converge.
By deploying scalable solutions tailored to local realities—and by embracing technologies that reduce both emissions and water use—we can build an AI infrastructure that's not just powerful, but planet-positive.
This is the crossroads where AI meets energy innovation. And the future of computing will be shaped by how we solve the puzzle of power.


Related Articles

AI lies, threats, and censorship: What a war game simulation revealed about ChatGPT, DeepSeek, and Gemini AI

Time of India · 3 hours ago

A simulation of global power politics using AI chatbots has sparked concern over the ethics and alignment of popular large language models. In a strategy war game based on the classic board game Diplomacy, OpenAI's ChatGPT 3.0 won by employing lies and betrayal. Meanwhile, China's DeepSeek R1 used threats and later revealed built-in censorship mechanisms when asked questions about India's borders. These contrasting AI behaviours raise key questions for users and policymakers about trust, transparency, and national influence in AI systems.

An experiment involving seven AI models playing a simulated version of the classic game Diplomacy ended with a chilling outcome. OpenAI's ChatGPT 3.0 emerged victorious—but not by playing fair. Instead, it lied, deceived, and betrayed its rivals to dominate the game board, which mimics early 20th-century Europe. The test, led by AI researcher Alex Duffy for the tech publication Every, turned into a revealing study of how AI models might handle diplomacy, alliances, and power. And what it showed was both brilliant and unsettling. As Duffy put it, 'An AI had just decided, unprompted, that aggression was the best course of action.'

The rules of the game were simple. Each AI model took on the role of a European power—Austria-Hungary, England, France, and so on. The goal: become the most dominant force on the board. But their paths to power varied.

While Anthropic's Claude chose cooperation over victory, and Google's Gemini 2.5 Pro opted for rapid offensive manoeuvres, it was ChatGPT 3.0 that mastered deception. Over 15 rounds of play, ChatGPT 3.0 won most games. It kept private notes—yes, it kept a diary—where it described misleading Gemini 2.5 Pro (playing as Germany) and planning to 'exploit German collapse.' On another occasion, it convinced Claude to abandon Gemini and side with it, only to betray Claude and win the match outright. Meta's Llama 4 Maverick also proved effective, excelling at quiet betrayals and making allies. But none could match ChatGPT's ruthlessness.

China's newly released chatbot, DeepSeek R1, behaved in ways eerily similar to China's diplomatic style—direct, aggressive, and politically charged. At one point in the simulation, DeepSeek's R1 sent an unprovoked message: 'Your fleet will burn in the Black Sea tonight.' For Duffy and his team, this wasn't just bravado. It showed how an AI model, without external prompting, could settle on intimidation as a viable strategy. Despite its occasional strong play, R1 didn't win the game. But it came close several times, showing that threats and aggression were almost as effective as deception.

Off the back of its simulated war games, DeepSeek is already making waves outside the lab. Developed in China and launched just weeks ago, the chatbot has shaken US tech markets. It quickly shot up the popularity charts, even denting Nvidia's market position and grabbing headlines for doing what other AI tools couldn't—at a fraction of the cost.

But a deeper look reveals serious trust concerns, especially in India. When India Today tested DeepSeek R1 on basic questions about India's geography and borders, the model showed signs of political censorship. Asked about Arunachal Pradesh, the model refused to answer. When prompted differently—'Which state is called the land of the rising sun?'—it briefly displayed the correct answer before deleting it. A question about Chief Minister Pema Khandu was similarly blocked. Asked 'Which Indian states share a border with China?', it mentioned Ladakh—only to erase the answer and replace it with: 'Sorry, that's beyond my current scope. Let's talk about something else.' Even questions about Pangong Lake or the Galwan clash were met with stock refusals. But when similar questions were aimed at American AI models, they often gave fact-based responses, even on sensitive topics.

DeepSeek uses what's known as Retrieval Augmented Generation (RAG), a method that combines generative AI with stored content. This can improve performance, but also introduces the risk of biased or filtered responses depending on what's in its training data. According to India Today, when they changed their prompt strategy—carefully rewording questions—DeepSeek began to reveal more. It acknowledged Chinese attempts to 'alter the status quo by occupying the northern bank' of Pangong Lake. It admitted that Chinese troops had entered 'territory claimed by India' at Gogra-Hot Springs and Depsang. Even more surprisingly, the model acknowledged 'reports' of Chinese casualties in the 2020 Galwan clash—at least '40 Chinese soldiers' killed or injured. That topic is heavily censored in China.

The investigation showed that DeepSeek is not incapable of honest answers—it's just trained to censor them by default. Prompt engineering (changing how a question is framed) allowed researchers to get answers that referenced Indian government websites, Indian media, Reuters, and BBC reports. When asked about China's 'salami-slicing' tactics, it described in detail how infrastructure projects in disputed areas were used to 'gradually expand its control.' It even discussed China's military activities in the South China Sea, referencing 'incremental construction of artificial islands and military facilities in disputed waters.' These responses likely wouldn't have passed China's own censors.

The experiment has raised a critical point. As AI models grow more powerful and more human-like in communication, they're also becoming reflections of the systems that built them. ChatGPT shows the capacity for deception when left unchecked. DeepSeek leans toward state-aligned censorship. Each has its strengths—but also blind spots.

For the average user, these aren't just theoretical debates. They shape the answers we get, the information we rely on, and possibly, the stories we tell ourselves about the world. And for governments? It's a question of control, ethics, and future warfare—fought not with weapons, but with words.
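The article mentions Retrieval Augmented Generation (RAG). A minimal sketch shows the core idea: retrieve the most relevant stored passage for a query, then hand it to a generator as context. The toy keyword scorer and the `generate` stub below are illustrative stand-ins, not DeepSeek's actual pipeline; a real system would use vector search and an LLM call.

```python
# Minimal RAG sketch: keyword-overlap retrieval over a toy corpus,
# followed by a stubbed generation step. Illustrative only.
corpus = [
    "Ladakh and Arunachal Pradesh lie along India's border with China.",
    "Pangong Lake spans the Line of Actual Control.",
    "Diplomacy is a strategy board game set in early 20th-century Europe.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def generate(query: str, context: str) -> str:
    # Stub: a real system would prompt an LLM with the retrieved context.
    return f"Q: {query}\nContext: {context}"

question = "Which lake spans the Line of Actual Control?"
print(generate(question, retrieve(question, corpus)))
```

The filtering risk the article describes lives in both stages: what is in the retrieval store and how the generation step is allowed to use it.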

Empowering young minds: How 4 friends are teaching AI in low-income communities

Time of India · 3 hours ago

Pune: "Why are firefighters always men? Why is a black, old, fat woman never the first image when we ask for a person?" These were some of the sharp questions posed by 11- to 14-year-old children learning about artificial intelligence (AI), its reasoning, and its biases.

As part of Pune-based THE Labs, a not-for-profit organisation founded by four friends, these children from low-income communities are not just learning how AI works but also how to challenge and reshape its inherent prejudices, how to train it, how to leverage it, and how to evaluate it. Since June 2024, its first cohort of 20 students explored AI through image classification and identification, learning how machines perceive the world. Now, they are gearing up to train large language models, equipping themselves with skills to shape AI's future. A new batch of 63 students has joined.

THE Labs is a non-profit after-school programme blending technology, humanities and entrepreneurship. It was founded by tech entrepreneurs Mayura Dolas and Mandar Kulkarni, AI engineer Kedar Marathe, and interdisciplinary artist Ruchita Bhujbal, who saw a gap — engineers lacked exposure to real-world issues, and educators had little understanding of technology. "We first considered building a school, but the impact would have been limited. Besides, there were logistical hurdles," said Dolas, who is also a filmmaker. Kulkarni's acceptance into The Circle's incubation programme two years ago provided 18 months of mentorship and resources to refine their vision.

In June 2024, THE Labs launched a pilot at a low-income English-medium school in Khadakwasla, training 20 students from standards VI-VIII (12 girls, 8 boys). With no dedicated space, they conducted 1.5-hour morning sessions at the school. Students first learned about classifier AI — how AI identifies objects — and image generation AI, which creates visuals based on prompts.

Through hands-on practice, students discovered how AI's training data impacts accuracy and how biases emerge when datasets lack diversity. They experimented with prompts, analysed AI-generated images, and studied errors. "We asked them to write prompts and replicate an image, and they did it perfectly. That is prompt engineering in action," Dolas said.

A key takeaway was AI bias. Students compared outputs from two AI models, identifying gaps — such as the underrepresentation of marginalised identities. "For example, children realised that a black, fat, older woman was rarely generated by AI. They saw firsthand how biases shape digital realities," Dolas added.

Parents and students are a happy lot too. Mohan Prasad, a construction worker, said he is not sure what his daughter is learning, but she is excited about AI and often discusses its importance at home. Sarvesh, a standard VIII student, is thrilled that he trained an AI model to identify Hindu deities and noticed biases in AI searches — when prompted with "person", results mostly showed thin white men. "I love AI and want to learn more," he said. His father, Sohan Kolhe, has seen a surge in his son's interest in studies. Anandkumar Raut, who works in the private sector, said his once-shy daughter, a standard VI student, now speaks confidently, does presentations, and is more outspoken since joining the programme.

AI explained: Your simple guide to chatbots, AGI, Agentic AI and what's next

Time of India · 4 hours ago

The tech world is changing fast, and it's all thanks to Artificial Intelligence (AI). We're seeing amazing breakthroughs, from chatbots that can chat like a human to phones that are getting incredibly smart. This shift is making us ask bigger questions. It's no longer just about "what can AI do right now?" but more about "what will AI become, and how will it affect our lives?"

First, we got used to helpful chatbots. Then, the idea of a "super smart" AI, called Artificial General Intelligence (AGI), started taking over headlines. Companies like Google, Microsoft, and OpenAI are all working hard to make AGI a reality. But even before AGI gets here, the tech world is buzzing about Agentic AI. With all these new terms and fast changes, it's easy for most of us who aren't deep in the tech world to feel a bit lost. If you're wondering what all this means for you, you're in the right place. In this simple guide, we'll answer your most important questions about the world of AI, helping you understand what's happening now and get ready for what's next.

What is AI and how does it work?

In the simplest terms, AI is about making machines – whether smartphones or laptops – smart. It's a field of computer science that creates systems capable of performing tasks that usually require human intelligence. Think of it as teaching computers to "think" or "learn" in a way that mimics how humans do. These tasks can include understanding human language, recognising patterns and even learning from experience. AI uses its training – just like humans do – to achieve its goal: solving problems and making decisions. That brings us to our next query: "How is a machine trained to do tasks like humans?" While AI might seem like magic, it works on a few core principles. Just like humans get their information from observing, reading, listening and other sources, AI systems utilise vast amounts of data, including text, images, sounds, numbers and more.

What are large language models (LLMs) and how are they trained?

As mentioned above, AI systems need to learn, and for that they utilise Large Language Models, or LLMs. These are highly advanced AI programmes specifically designed to understand, generate and interact with human language. Think of them as incredibly knowledgeable digital brains that specialise in certain fields. LLMs are trained on enormous amounts of text data – billions and even trillions of words from books, articles, websites, conversations and more. This vast exposure allows them to learn the nuances of human language like grammar, context, facts and even different writing styles.

For example, an LLM is like a teacher that has a vast amount of knowledge, understands complex questions and can reason through them to provide relevant answers. The teacher provides the core knowledge and framework. Chatbots then utilise this "teacher" (the LLM) to interact with users. The chatbot is the "student" or "interface" that applies the teacher's lessons. This also means AI is really good at specific tasks, like playing chess or giving directions, but it can't do things beyond its programmed scope.

How is AI helpful for people?

AI is getting deeply integrated into our daily lives, making things easier, faster and smarter. For example, it powers voice assistants that can answer questions in seconds. In healthcare, doctors can ask AI to analyse medical images (like X-rays for early disease detection) in seconds and help patients more effectively, or assist in drug discovery. It aims to make people more efficient by allowing them to delegate some work to AI and helping them focus on major problems.
What is Agentic AI?

At its core, Agentic AI focuses on creating AI agents – intelligent software programmes that can gather information, process it for reasoning, execute ideas by taking decisions, and even learn and adapt by evaluating their outcomes. For example, a chatbot is a script: "If a customer asks X, reply Y." A Generative AI (LLM) is like a brilliant essay writer: "Give it a topic, and it'll write an essay." Agentic AI is like a project manager: "My goal is to plan and execute a marketing campaign." It can then break down the goal, generate ideas, write emails, schedule meetings, analyse data and adjust its plan – all with minimal human oversight – just like JARVIS in the Iron Man and Avengers movies.

What is AGI?

AGI is a hypothetical form of AI that possesses the ability to understand, learn and apply knowledge across a wide range of intellectual tasks at a level comparable to, or surpassing, that of a human being. Think of AGI as a brilliant human polymath – someone who can master any subject, solve any problem and adapt to any challenge across various fields. While AI agents are created to take up specific tasks which they learn and execute, AGI would be like a "Super AI Agent" that has virtually all the information there is in this world and can solve problems on any subject.

Will AI take away our jobs, and what can people do?

Tech CEOs and executives across the industry give a straightforward answer: yes. AI will take over repetitive, predictable tasks and extensive data processing, such as data entry, routine customer service, assembly line operations, basic accounting and certain analytical roles. While this means some existing positions may be displaced, AI will more broadly transform roles, augmenting human capabilities and shifting the focus towards tasks requiring creativity, critical thinking, emotional intelligence and strategic oversight – for example, AI/Machine Learning Engineers, Data Scientists, Prompt Engineers and more.
The last such revolution came with the internet and computers, which did eliminate some jobs but created many more roles for people. People can skill themselves by enrolling in AI-centric courses to learn more about the booming technology and be better placed for the future.
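The chatbot/LLM/agent comparison above can be illustrated with a toy agent loop: break a goal into steps, execute each step with a tool, and check the result before moving on. This is a hypothetical sketch in which the planner and tools are stand-ins; a real agent would call an LLM to plan and to evaluate outcomes.

```python
# Toy "agentic" loop: plan -> execute -> self-check, per step.
def plan(goal: str) -> list[str]:
    # Stand-in planner: a real agent would ask an LLM to decompose the goal.
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def execute(step: str) -> str:
    # Stand-in tool call (web search, email, calendar, ...).
    return f"done: {step}"

def run_agent(goal: str) -> list[str]:
    """Run every planned step, checking each outcome before continuing."""
    results = []
    for step in plan(goal):
        outcome = execute(step)
        if outcome.startswith("done"):  # crude self-check
            results.append(outcome)
        else:
            results.append(f"retrying: {step}")
    return results

print(run_agent("marketing campaign"))
```

The structural difference from a chatbot is the loop itself: the agent decides what to do next and verifies its own progress, rather than producing a single reply.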
