How Much Energy Does AI Use? The People Who Know Aren't Saying

WIRED · Jun 19, 2025, 6:00 AM

A growing body of research attempts to put a number on the energy use of AI—even as the companies behind the most popular models keep their carbon emissions a secret.

Photograph: Bloomberg/Getty Images
'People are often curious about how much energy a ChatGPT query uses,' Sam Altman, the CEO of OpenAI, wrote in an aside in a long blog post last week. The average query, Altman wrote, uses 0.34 watt-hours of energy: 'About what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes.'
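As a quick back-of-the-envelope check, the comparisons are internally consistent, though the appliance wattages below are illustrative assumptions rather than figures OpenAI has published:

```python
# Sanity check of Altman's 0.34 Wh-per-query comparisons.
# The oven and bulb wattages are assumed typical values,
# not numbers provided by OpenAI.

QUERY_WH = 0.34     # Altman's claimed energy for an average ChatGPT query
OVEN_WATTS = 1200   # assumed draw of a household electric oven
LED_WATTS = 10      # assumed draw of a high-efficiency LED bulb

def seconds_to_consume(watts: float, energy_wh: float) -> float:
    """Seconds a device drawing `watts` takes to use `energy_wh`."""
    return energy_wh / watts * 3600

print(f"Oven: {seconds_to_consume(OVEN_WATTS, QUERY_WH):.1f} s")          # ~1.0 s
print(f"LED bulb: {seconds_to_consume(LED_WATTS, QUERY_WH)/60:.1f} min")  # ~2.0 min
```

At roughly 1,200 watts, 0.34 watt-hours does correspond to about one second of oven time, and to about two minutes for a 10-watt bulb.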
For a company with 800 million weekly active users (and growing), the question of how much energy all these queries are using is becoming an increasingly pressing one. But experts say Altman's figure doesn't mean much without more public context from OpenAI about how it arrived at the calculation—including how an 'average' query is defined, whether it includes image generation, and whether the figure accounts for additional energy use, like training AI models and cooling OpenAI's servers.
As a result, Sasha Luccioni, the climate lead at AI company Hugging Face, doesn't put too much stock in Altman's number. 'He could have pulled that out of his ass,' she says. (OpenAI did not respond to a request for more information about how it arrived at this number.)
As AI takes over our lives, it's also promising to transform our energy systems, supercharging carbon emissions right as we're trying to fight climate change. Now, a new and growing body of research is attempting to put hard numbers on just how much carbon we're actually emitting with all of our AI use.
This effort is complicated by the fact that major players like OpenAI disclose little environmental information. An analysis submitted for peer review this week by Luccioni and three other authors makes the case for more environmental transparency in AI models. Using data from OpenRouter, a leaderboard of large language model (LLM) traffic, she and her colleagues find that 84 percent of LLM use in May 2025 went to models with zero environmental disclosure. That means consumers are overwhelmingly choosing models with completely unknown environmental impacts.
'It blows my mind that you can buy a car and know how many miles per gallon it consumes, yet we use all these AI tools every day and we have absolutely no efficiency metrics, emissions factors, nothing,' Luccioni says. 'It's not mandated, it's not regulatory. Given where we are with the climate crisis, it should be top of the agenda for regulators everywhere.'
As a result of this lack of transparency, Luccioni says, the public is being exposed to estimates that make no sense but which are taken as gospel. You may have heard, for instance, that the average ChatGPT request takes 10 times as much energy as the average Google search. Luccioni and her colleagues trace this claim back to a public remark that John Hennessy, the chairman of Alphabet, Google's parent company, made in 2023.
A claim made by a board member from one company (Google) about the product of another company to which he has no relation (OpenAI) is tenuous at best—yet, Luccioni's analysis finds, this figure has been repeated again and again in press and policy reports. (As I was writing this piece, I got a pitch with this exact statistic.)
'People have taken an off-the-cuff remark and turned it into an actual statistic that's informing policy and the way people look at these things,' Luccioni says. 'The real core issue is that we have no numbers. So even the back-of-the-napkin calculations that people can find, they tend to take them as the gold standard, but that's not the case.'
One way to peek behind the curtain for more accurate information is to work with open-source models. Some tech giants, including OpenAI and Anthropic, keep their models proprietary—meaning outside researchers can't independently verify their energy use. But other companies make parts of their models publicly available, allowing researchers to more accurately gauge their emissions.
A study published Thursday in the journal Frontiers in Communication evaluated 14 open-source large language models, including two Meta Llama models and three DeepSeek models, and found that some used as much as 50 percent more energy than others in the dataset when responding to the researchers' prompts. The 1,000 benchmark prompts submitted to the LLMs included questions on topics such as high school history and philosophy; half were formatted as multiple choice, with only one-word answers allowed, while half were submitted as open prompts, allowing for a freer format and longer answers. Reasoning models, the researchers found, generated far more thinking tokens—the tokens a model produces as internal reasoning before settling on its answer, and a hallmark of higher energy use—than more concise models. These models, perhaps unsurprisingly, were also more accurate on complex topics. (They also had trouble with brevity: During the multiple-choice phase, for instance, the more complex models would often return answers of multiple tokens, despite explicit instructions to answer only from the range of options provided.)
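The mechanics behind that finding are simple: inference energy scales roughly with the number of tokens a model generates, so thousands of hidden thinking tokens can dwarf the cost of a short answer. Here is a minimal sketch, with a made-up per-token energy figure and token counts chosen purely for illustration (none of these numbers come from the study):

```python
# Why thinking tokens drive up energy use: inference energy scales
# roughly with tokens generated. The per-token energy figure and the
# token counts below are illustrative assumptions, not study data.

WH_PER_TOKEN = 0.0005  # assumed marginal energy per generated token

models = {
    "concise model":   {"answer": 40, "thinking": 0},
    "reasoning model": {"answer": 60, "thinking": 1500},
}

for name, toks in models.items():
    total = toks["answer"] + toks["thinking"]
    print(f"{name}: {total} tokens ~ {total * WH_PER_TOKEN:.3f} Wh")
```

Under these assumptions the reasoning model uses roughly 40 times the energy per answer; the ratio tracks the token counts, whatever the true per-token cost.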
Maximilian Dauner, a PhD student at the Munich University of Applied Sciences and the study's lead author, says he hopes AI use will evolve toward matching each query to the least energy-intensive model that can handle it. He envisions a process where smaller, simpler questions are automatically directed to less energy-intensive models that will still provide accurate answers. 'Even smaller models can achieve really good results on simpler tasks, and don't have that huge amount of CO2 emitted during the process,' he says.
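In code, the routing Dauner describes could look something like the sketch below. The difficulty heuristic and model names are placeholders invented for illustration; a production router would use a trained classifier rather than word counts.

```python
# Sketch of energy-aware query routing: simple questions go to a
# small model, hard ones to a large reasoning model. The heuristic
# and model names are hypothetical, invented for illustration.

def estimate_difficulty(query: str) -> float:
    """Toy heuristic: long or reasoning-heavy questions score higher."""
    q = query.lower()
    score = len(q.split()) / 50
    score += 0.3 * sum(q.count(w) for w in ("why", "prove", "derive"))
    return min(score, 1.0)

def route(query: str) -> str:
    return "small-efficient-model" if estimate_difficulty(query) < 0.5 else "large-reasoning-model"

print(route("What year did WWII end?"))  # -> small-efficient-model
print(route("Derive why attention cost grows with context length "
            "and explain the KV-cache tradeoffs involved."))  # -> large-reasoning-model
```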
Some tech companies already do this. Google and Microsoft have previously told WIRED that their search features use smaller models when possible, which can also mean faster responses for users. But generally, model providers have done little to nudge users toward using less energy. How quickly a model answers a question, for instance, has a big impact on its energy use—but that's not explained when AI products are presented to users, says Noman Bashir, the Computing & Climate Impact Fellow at MIT's Climate and Sustainability Consortium.
'The goal is to provide all of this inference the quickest way possible so that you don't leave their platform,' he says. 'If ChatGPT suddenly starts giving you a response after five minutes, you will go to some other tool that is giving you an immediate response.'
However, there are myriad other considerations to take into account when calculating the energy use of complex AI queries, because the question isn't just theoretical—the conditions under which queries actually run in the real world matter. Bashir points out that physical hardware makes a difference when calculating emissions. Dauner ran his experiments on an Nvidia A100 GPU, but Nvidia's H100 GPU—which was specially designed for AI workloads and which, according to the company, is becoming increasingly popular—is much more energy-intensive.
Physical infrastructure also makes a difference when talking about emissions. Large data centers need cooling systems, lighting, and networking equipment, all of which add to the energy draw; they often run in diurnal cycles, taking a break at night when query volumes are lower. And they are hooked up to different types of grids—ones overwhelmingly powered by fossil fuels versus those powered by renewables—depending on their locations.
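One common way to fold that overhead into a per-query estimate is a power usage effectiveness (PUE) multiplier, followed by a grid carbon-intensity factor. The sketch below applies both to Altman's figure; the PUE and grid intensities are assumed, broadly typical values, not disclosures from any provider:

```python
# From per-query energy to per-query emissions: scale by data-center
# overhead (PUE) and the local grid's carbon intensity. The PUE and
# intensity values are assumptions, not figures from OpenAI.

QUERY_WH = 0.34   # Altman's claimed server-side energy per query
PUE = 1.3         # assumed ratio of total facility power to IT power

GRID_G_CO2_PER_KWH = {
    "mostly fossil grid": 700,      # assumed gCO2 per kWh
    "mostly renewable grid": 50,
}

facility_kwh = QUERY_WH * PUE / 1000  # total facility energy per query
for grid, intensity in GRID_G_CO2_PER_KWH.items():
    print(f"{grid}: {facility_kwh * intensity:.3f} g CO2 per query")
```

The same query can carry more than an order of magnitude more carbon on a fossil-heavy grid, which is why location matters as much as the model itself.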
Bashir compares studies that look at emissions from AI queries without factoring in data center needs to lifting up a car, hitting the gas, and counting revolutions of a wheel as a way of doing a fuel-efficiency test. 'You're not taking into account the fact that this wheel has to carry the car and the passenger,' he says.
Perhaps most crucially for our understanding of AI's emissions, open source models like the ones Dauner used in his study represent a fraction of the AI models used by consumers today. Training a model and updating deployed models takes a massive amount of energy—figures that many big companies keep secret. It's unclear, for example, whether the light bulb statistic about ChatGPT from OpenAI's Altman takes into account all the energy used to train the models powering the chatbot. Without more disclosure, the public is simply missing much of the information needed to start understanding just how much this technology is impacting the planet.
'If I had a magic wand, I would make it mandatory for any company putting an AI system into production, anywhere, around the world, in any application, to disclose carbon numbers,' Luccioni says.
Paresh Dave contributed reporting.