Redrawing the not-so-pretty energy footprint of AI

The Hindu | 05-05-2025

Generative Artificial Intelligence (AI) has undoubtedly eased access to art and reduced the time and effort required to complete certain tasks. For example, ChatGPT-4o can generate a Studio Ghibli-inspired portrait in seconds with just a prompt. But this ease comes at a significant energy cost that is often overlooked — one that has even led to Graphics Processing Units (GPUs) 'melting'. As AI tools advance, this environmental impact will only become more severe, making the technology unsustainable on its current path. How can AI be developed sustainably? And can leveraging nuclear energy, specifically Small Modular Reactors (SMRs), be a possible alternative?
AI is not free. Every time one uses ChatGPT or any other AI tool, somewhere in the world, there is a data centre chugging electricity, much of which is generated from fossil fuels. 'It's super fun seeing people love images in ChatGPT, but our GPUs are melting,' tweeted Sam Altman, CEO of OpenAI. Projections indicate that these data centres could account for 10% of the world's total electricity usage by 2030. Though these estimates mirror worldwide energy trends, it is necessary to highlight that India currently has sufficient capacity to generate electricity for its own domestic AI needs. Yet, with increasing adoption and ambitions, proactive planning is imperative.
Training an AI model, whether it is a conversational tool such as ChatGPT or an image-generation tool such as Midjourney, can generate as much CO2 as five cars emit over their entire lifetimes. Once deployed, AI tools continue to draw immense power from data centres as they serve countless users around the globe. This resource consumption is staggering, and it is only becoming more unsustainable as AI adoption grows.
To start with, AI companies need to be transparent about their energy consumption. Just as some regulations mandate the disclosure of privacy practices surrounding data usage, companies must also be required to disclose their environmental impact: first, how much energy is being consumed; second, where it is coming from; and third, what steps are being taken to minimise consumption. Such data would provide further insight into where energy is being used the most and encourage research and development towards a more sustainable model of AI development.
Advantages of SMRs
Another, perhaps controversial, solution would be to address the energy source behind all of this technological growth. It is time nuclear energy, particularly SMRs, is discussed seriously. While this is often a subject of heated debate, it is also a powerful potential solution to the energy demands created by AI and other emerging technologies. The AI boom is happening fast, and the current energy infrastructure will just not be able to keep up.
SMRs present a transformative opportunity for the global energy landscape to support booming AI and data infrastructure. Unlike traditional large-scale nuclear power plants that demand extensive land, water, and infrastructure, SMRs are designed to be compact and scalable. This flexibility allows them to be deployed closer to high-energy-demand facilities, such as data centres, which require consistent and reliable power to manage vast computational workloads. Their ability to provide 24x7, zero-carbon, baseload electricity makes them an ideal alternative to renewable sources such as solar and wind by ensuring a stable energy supply regardless of weather conditions.
The benefits of SMRs extend beyond energy reliability. Their modular design reduces construction time and costs compared with conventional nuclear plants, enabling faster deployment to meet the rapidly growing demands of AI and data-driven industries. Additionally, SMRs offer enhanced safety features, with passive safety systems that rely on natural phenomena to cool the reactor core and shut it down safely, reducing the risk of accidents. This makes them more acceptable and easier to integrate in regions where large-scale nuclear facilities would face opposition. The ability of SMRs to operate in diverse environments, from urban areas to remote locations, also supports the decentralisation of energy production, reducing transmission losses and enhancing grid resilience.
Some of the challenges
However, the adoption of SMRs is not without challenges. Significant policy shifts will be required to create a robust regulatory framework that addresses safety, waste management and public perception. There is also the matter of substantial upfront investment, as the technology is still maturing and may struggle to compete on cost with established energy sources. Additionally, coordinating SMR deployment with existing renewable energy initiatives will require careful planning to maximise synergies while minimising redundancy. In India's case, despite these challenges, the cost of electricity from SMRs is projected to fall from ₹10.3 to ₹5 per kWh once the reactors are operational, which is below the country's average cost of electricity.
In conclusion, a public-private partnership model presents a realistic solution to the challenges of sustainable AI development. By leveraging the strengths of both sectors, this model can facilitate the efficient development of SMRs alongside other forms of renewable energy to support advancements in AI.
Anwesha Sen and Sourav Mannaraprayil are with The Takshashila Institution.


Related Articles

Google offers buyouts to workers; OpenAI in talks with Saudi Arabia, Indian investors to raise funds; Meta, TikTok challenge fees in EU court

The Hindu

Google offers buyouts to workers

Google is offering buyouts to more employees in another round of layoffs as the tech giant awaits court verdicts in its search and ads antitrust lawsuits. The Sundar Pichai-led company has confirmed the news, although it hasn't disclosed the number of employees who will be affected. The employees are spread across the search, advertising, research and engineering segments. U.S. District Judge Amit Mehta is deciding whether to ban Google's exclusive default-search agreement with Apple. The judgment is expected before Labour Day, after which Google can appeal last year's decision in which the company was ruled to be a monopoly. A federal judge also found that Google's digital ads business abused its market power and tamped down competition. In 2023, the company laid off 12,000 workers after the hiring splurge Big Tech companies went on during the pandemic.

OpenAI in talks with Saudi Arabia, Indian investors to raise funds

OpenAI is reportedly in discussions with new investors, including Saudi Arabia's Public Investment Fund (PIF), India's Reliance Industries and existing shareholder MGX of the United Arab Emirates, to raise $40 billion in funds. The investors are expected to invest at least hundreds of millions of dollars each, a report by 'The Information' said. The funds will be used to develop AI infrastructure as part of the Stargate venture, a partnership with SoftBank. OpenAI CEO Sam Altman met with India's IT Minister earlier this year, after which he also planned to meet the Abu Dhabi investment group MGX. The AI firm is also speaking with Coatue and Founders Fund to raise at least $100 million each. The report also stated that OpenAI could raise an additional $17 billion in 2027. OpenAI hasn't confirmed the reports yet.

Meta, TikTok challenge fees in EU court

Meta and TikTok are challenging an EU supervisory fee, and how it is levied and calculated, in Europe's second-highest court. The social media platforms called the fee disproportionate and based on a flawed methodology. The companies, along with 16 other tech companies, had to pay a supervisory fee of 0.05% of their annual worldwide net income under the Digital Services Act, which became law in 2022. The fee is intended to cover the EU's cost of monitoring their compliance with the law. The size of the annual fee is based on the average number of monthly active users for each platform. Meta said that it wasn't trying to avoid its share of the fee but was questioning the way the amount had been calculated. A representative for TikTok said that the EU had counted users twice if they logged in from both their phones and their laptops.

In a first, UN report calls for urgent action on AGI: What does it mean?

Indian Express

The United Nations Council of Presidents of the General Assembly (UNCPGA) recently released a report calling for immediate global coordination ahead of the possible emergence of artificial general intelligence (AGI). At present, leading tech giants OpenAI, Google, and Meta are working towards achieving AGI, essentially systems capable of equalling or surpassing human intelligence in various cognitive tasks. OpenAI's Sam Altman has been saying AGI may be on the horizon, although these human-level systems aren't here yet. Similarly, Google DeepMind is reportedly creating a 'world-modelling' team to simulate environments as a step towards AGI. Meta, on the other hand, is reportedly investing $15 billion in Scale AI and assembling a 50-member squad to accelerate its AGI vision. Anthropic is working on safe and adaptable models, and there is speculation that it could reach AGI in the next two to three years. So far, no company has demonstrated true AGI, and most of them are scaling up multimodal and reinforcement-learning systems.

Even though AGI remains a work in progress, the UNCPGA report suggests that, with some of the largest financial investments in history and unprecedented R&D efforts, AGI could emerge within this decade. It could lead to extraordinary benefits, such as accelerating scientific discoveries related to public health, transforming industries, increasing productivity, and even contributing to the realisation of sustainable development goals. The report also acknowledges that AGI could lead to unique and potentially 'catastrophic risks'. AGI could execute harmful actions beyond human oversight, resulting in irreversible impacts. 'Without proactive global management, competition among nations and corporations will accelerate risky AGI development, undermine security protocols, and exacerbate geopolitical tensions. Coordinated international action can prevent these outcomes, promoting secure AGI development and usage, equitable distribution of benefits, and global stability,' reads the report.

The UNCPGA report recommends immediate and coordinated international action, supported by the United Nations, to effectively address the global challenges that could arise from AGI. It underscores six major risks: loss of human control over AGI; AGI-enabled weapons of mass destruction; cybersecurity vulnerabilities; economic instability; autonomous AGI with existential risks; and missed opportunities to solve global challenges. The UN panel also listed recommendations to mitigate these risks, including a dedicated UN General Assembly session on AGI; a global AGI observatory to track development and risks; a certification system for secure and trustworthy AGI; a proposed UN framework convention on AGI governance; and exploring a dedicated UN agency for global coordination.

Who is Alexandr Wang, and why is Meta betting billions on his startup Scale AI?

Indian Express

Alexandr Wang is the CEO and co-founder of Scale AI, a data-labelling startup that helps other companies train and deploy cutting-edge AI models. Over the years, Wang has built his startup into the backbone of the AI boom, quietly enabling everything from autonomous vehicles to large language models (LLMs). Now, Wang finds himself at the centre of a potential $15 billion shake-up as Meta taps him to lead its newly formed research lab that will focus on building AI systems capable of 'superintelligence'. The $15 billion investment deal is also expected to bring other Scale AI employees to Meta, which is reportedly offering seven- to nine-figure compensation packages to AI researchers from OpenAI and Google who would like to join its new 50-member artificial superintelligence lab.

The new lab comes at a crucial time for Meta, which is perceived to be struggling to pull ahead of its competitors Google, Microsoft, and OpenAI in the high-stakes AI race. CEO Mark Zuckerberg has pushed for AI to be incorporated across the company's products, such as its Ray-Ban smart glasses as well as the social media platforms Facebook, Instagram, and WhatsApp. Meta has also sought to define its competitive edge by developing open AI models, allowing developers to freely download and integrate the source code into their own tools. But internal issues such as employee turnover and underwhelming product launches have reportedly hampered Meta's AI efforts lately. So far, the company's research efforts have been overseen by its chief AI scientist, Turing Award winner Yann LeCun, who is widely recognised for his groundbreaking research on convolutional neural networks (CNNs). However, LeCun's views on AI are not aligned with those of others in Silicon Valley, as he has argued that LLMs are not the path to artificial general intelligence (AGI). Now, Meta is betting on Wang to not only help it regain the lead in the AI race but also push toward another frontier known as artificial superintelligence (ASI), a hypothetical AI system with intelligence exceeding that of the human brain.

Alexandr Wang was born in New Mexico, US, to Chinese immigrant parents who worked at Los Alamos National Laboratory as nuclear physicists. Before heading to college, Wang reportedly worked at the internet startup Quora. He dropped out of the Massachusetts Institute of Technology (MIT) after just one year and joined Y Combinator, the popular startup accelerator that used to be led by OpenAI CEO Sam Altman. At Y Combinator, he teamed up with Quora alum Lucy Guo to start a new company called Scale AI in 2016. Two years later, both Wang and Guo were named in Forbes' 30 Under 30 list in enterprise technology. Guo exited Scale AI shortly afterwards 'due to differences in product vision and road map,' according to a report by Forbes. Wang continued running the startup, which was minted as a unicorn in 2019 after raising $100 million from Peter Thiel's Founders Fund, followed by another $580 million fundraising round that put the company at a $7 billion valuation. At 24, Wang became the youngest self-made billionaire in the world. His co-founder, Lucy Guo, recently became the youngest self-made woman billionaire due to her stake in Scale AI. Wang was reportedly Sam Altman's roommate during the COVID-19 pandemic. The two AI industry leaders were also photographed sitting next to each other at US President Donald Trump's swearing-in ceremony in January this year.

Scale AI was founded in 2016 as a startup that labelled the mass quantities of data required to train AI systems, particularly for autonomous vehicles (AVs). As a result, most of its data services were initially offered to self-driving automakers. This early move to corner the market for training data, which helped self-driving cars tell the difference between various objects, left Scale AI well positioned for the AI boom that soon followed. LLMs are trained on massive amounts of data to generate text and other content. Scale AI hires thousands of contract workers to sift through vast amounts of data, label the information, and clean the datasets that are then supplied to tech companies to train their complex AI models. Scale AI's client list includes major automakers such as Toyota and Honda as well as Waymo, Google's AV subsidiary. It has also partnered with Accenture to help the consulting giant build custom AI apps and models. OpenAI, Microsoft, and the Toronto-based AI startup Cohere also count among Scale AI's customers, according to a report by Forbes. The US government has also reportedly sought Scale AI's data labelling and annotation services to help analyse satellite imagery in Ukraine. Last valued at nearly $14 billion, the company saw about $870 million in revenue in 2024. It further expects to more than double revenue this year to $2 billion, which would put Scale AI's valuation at $25 billion, according to a report by Bloomberg. However, the AI boom has also given rise to a wave of relatively new competitors such as Surge AI, which offers data labelling tools to AI companies, as well as the data labelling startups Labelbox and Snorkel AI, which primarily cater to non-tech enterprises.
